Artificial Intelligence Nanodegree

Computer Vision Capstone

Project: Facial Keypoint Detection


Welcome to the final Computer Vision project in the Artificial Intelligence Nanodegree program!

In this project, you’ll combine your knowledge of computer vision techniques and deep learning to build an end-to-end facial keypoint recognition system! Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition.

There are three main parts to this project:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!


Here's what you need to know to complete the project:

  1. In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested.

    a. Sections that begin with '(IMPLEMENTATION)' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section, and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!

  2. In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation.

    a. Each section where you will answer a question is preceded by a 'Question X' header.

    b. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. Markdown cells can be edited by double-clicking the cell to enter edit mode.

The rubric contains optional suggestions for enhancing the project beyond the minimum requirements. If you decide to pursue the "(Optional)" sections, you should include the code in this IPython notebook.

Your project submission will be evaluated based on your answers to each of the questions and the code implementations you provide.

Steps to Complete the Project

Each part of the notebook is further broken down into separate steps. Feel free to use the links below to navigate the notebook.

In this project you will get to explore a few of the many computer vision algorithms built into the OpenCV library. This expansive computer vision library is now almost 20 years old and still growing!

The project itself is broken down into three large parts, then even further into separate steps. Make sure to read through each step, and complete any sections that begin with '(IMPLEMENTATION)' in the header; these implementation sections may contain multiple TODOs that will be marked in code. For convenience, we provide links to each of these steps below.

Part 1 : Investigating OpenCV, pre-processing, and face detection

  • Step 0: Detect Faces Using a Haar Cascade Classifier
  • Step 1: Add Eye Detection
  • Step 2: De-noise an Image for Better Face Detection
  • Step 3: Blur an Image and Perform Edge Detection
  • Step 4: Automatically Hide the Identity of an Individual

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

  • Step 5: Create a CNN to Recognize Facial Keypoints
  • Step 6: Compile and Train the Model
  • Step 7: Visualize the Loss and Answer Questions

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

  • Step 8: Build a Robust Facial Keypoints Detector (Complete the CV Pipeline)

Step 0: Detect Faces Using a Haar Cascade Classifier

Have you ever wondered how Facebook automatically tags images with your friends' faces? Or how high-end cameras automatically find and focus on a certain person's face? Applications like these depend heavily on the machine learning task known as face detection - which is the task of automatically finding faces in images containing people.

At its root, face detection is a classification problem - that is, a problem of distinguishing between distinct classes of things. With face detection, these distinct classes are 1) images of human faces and 2) everything else.

We use OpenCV's implementation of Haar feature-based cascade classifiers to detect human faces in images. OpenCV provides many pre-trained face detectors, stored as XML files on github. We have downloaded one of these detectors and stored it in the detector_architectures directory.

Import Resources

In the next python cell, we load in the required libraries for this section of the project.

In [1]:
# Import required libraries for this section

%matplotlib inline

import numpy as np
import matplotlib.pyplot as plt
import math
import cv2                     # OpenCV library for computer vision
from PIL import Image
import time 

Next, we load in and display a test image for performing face detection.

Note: by default OpenCV assumes the ordering of our image's color channels is Blue, then Green, then Red. This is slightly out of order with most image types we'll use in these experiments, whose color channels are ordered Red, then Green, then Blue. In order to switch the Blue and Red channels of our test image around we will use OpenCV's cvtColor function, which you can read more about by checking out some of its documentation located here. This is a general utility function that can do other transformations too, like converting a color image to grayscale or transforming a standard color image to the HSV color space.
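
For reference, here is a minimal sketch of those conversions (bgr_image is a hypothetical name for an image freshly loaded with cv2.imread, i.e., in BGR order):

# Minimal sketch of the cvtColor conversions mentioned above
rgb_image  = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2RGB)    # swap the Blue and Red channels
gray_image = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)   # color to grayscale
hsv_image  = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)    # color to the HSV color space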

In [2]:
# Load in color image for face detection
image = cv2.imread('images/test_image_1.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot our image using subplots to specify a size and title
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[2]:
<matplotlib.image.AxesImage at 0x9354b70>

There are a lot of people - and faces - in this picture. 13 faces to be exact! In the next code cell, we demonstrate how to use a Haar Cascade classifier to detect all the faces in this test image.

This face detector uses information about patterns of intensity in an image to reliably detect faces under varying light conditions. So, to use this face detector, we'll first convert the image from color to grayscale.

Then, we load in the fully trained architecture of the face detector -- found in the file haarcascade_frontalface_default.xml - and use it on our image to find faces!

To learn more about the parameters of the detector see this post.
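
As a quick reference, here is an illustrative sketch of the main detection parameters (the values below are placeholders, not tuned for this image, and the sketch assumes the gray image and face_cascade defined in the next cell):

# Illustrative (untuned) call showing detectMultiScale's main named parameters:
#   scaleFactor  - how much the image is shrunk at each level of the search pyramid
#   minNeighbors - how many overlapping candidate detections are required to keep a box
#   minSize      - the smallest face size (in pixels) that will be considered
example_faces = face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5, minSize=(30, 30))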

In [3]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[3]:
<matplotlib.image.AxesImage at 0xa18dd68>

In the above code, faces is a numpy array of detected faces, where each row corresponds to a detected face. Each detected face is a 1D array with four entries that specifies the bounding box of the detected face. The first two entries in the array (extracted in the above code as x and y) specify the horizontal and vertical positions of the top left corner of the bounding box. The last two entries in the array (extracted here as w and h) specify the width and height of the box.
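
As a quick illustration (reusing the image and faces variables from the cell above), those four values can be used to crop a detected face out of the image:

# Crop the first detected face out of the image using its bounding box
# (array rows run along the vertical (y) direction; columns along the horizontal (x) direction)
if len(faces) > 0:
    (x, y, w, h) = faces[0]
    face_crop = image[y:y+h, x:x+w]
    print('First face crop shape:', face_crop.shape)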


Step 1: Add Eye Detection

There are other pre-trained detectors available that use a Haar Cascade Classifier - including full human body detectors, license plate detectors, and more. A full list of the pre-trained architectures can be found here.

To test your eye detector, we'll first read in a new test image with just a single face.

In [4]:
# Load in color image for face detection
image = cv2.imread('images/james.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Plot the RGB image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)
Out[4]:
<matplotlib.image.AxesImage at 0xa1f0b00>

Notice that even though the image is a black and white image, we have read it in as a color image and so it will still need to be converted to grayscale in order to perform the most accurate face detection.

So, the next steps will be to convert this image to grayscale, then load OpenCV's face detector and run it with parameters that detect this face accurately.

In [5]:
# Convert the RGB  image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray, 1.25, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face Detection')
ax1.imshow(image_with_detections)
Number of faces detected: 1
Out[5]:
<matplotlib.image.AxesImage at 0xa24f240>

(IMPLEMENTATION) Add an eye detector to the current face detection setup.

A Haar-cascade eye detector can be included in the same way that the face detector was and, in this first task, it will be your job to do just this.

To set up an eye detector, use the stored parameters of the eye cascade detector, called haarcascade_eye.xml, located in the detector_architectures subdirectory. In the next code cell, create your eye detector and store its detections.

A few notes before you get started:

First, make sure to give your loaded eye detector the variable name

eye_cascade

and give the list of eye regions you detect the variable name

eyes

Second, since we've already run the face detector over this image, you should only search for eyes within the rectangular face regions detected in faces. This will minimize false detections.

Lastly, once you've run your eye detector over the facial detection region, you should display the RGB image with both the face detection boxes (in red) and your eye detections (in green) to verify that everything works as expected.

In [6]:
# Make a copy of the original image to plot rectangle detections
image_with_detections = np.copy(image)   

# Loop over the detections and draw their corresponding face detection boxes
for (x,y,w,h) in faces:
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h),(255,0,0), 3)  
    
# Do not change the code above this comment!

    
## Add eye detection, using haarcascade_eye.xml, to the current face detector algorithm
## Loop over the eye detections and draw their corresponding boxes in green on image_with_detections
# Extract the pre-trained eye detector from an xml file
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

# Search for eyes only within the detected face regions to minimize false detections
eyes = []
for (fx, fy, fw, fh) in faces:
    face_roi = gray[fy:fy+fh, fx:fx+fw]
    for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(face_roi, 1.20, 2):
        # Offset the eye coordinates back into the full-image frame
        eyes.append((fx+ex, fy+ey, ew, eh))

# Print the number of eyes detected in the image
print('Number of eyes detected:', len(eyes))

# For each eye, draw a green rectangle on the face
for (x,y,w,h) in eyes:
    cv2.rectangle(image_with_detections, (x,y), (x+w, y+h), (0,255,0), 1)

# Plot the image with both faces and eyes detected
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Image with Face and Eye Detection')
ax1.imshow(image_with_detections)
Number of eyes detected: 2
Out[6]:
<matplotlib.image.AxesImage at 0xb986828>

(Optional) Add face and eye detection to your laptop camera

It's time to kick it up a notch, and add face and eye detection to your laptop's camera! Afterwards, you'll be able to show off your creation like in the gif shown below - made with a completed version of the code!

Notice that not all of the detections here are perfect - and your result need not be perfect either. You should spend a small amount of time tuning the parameters of your detectors to get reasonable results, but don't hold out for perfection. If we wanted perfection we'd need to spend a ton of time tuning the parameters of each detector, cleaning up the input image frames, etc. You can think of this as more of a rapid prototype.

The next cell contains code for a wrapper function called laptop_camera_go that, when called, will activate your laptop's camera. You will place the relevant face and eye detection code in this wrapper function to implement face/eye detection and mark those detections on each image frame that your camera captures.

Before adding anything to the function, you can run it to get an idea of how it works - a small window should pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [7]:
### Add face and eye detection to this laptop camera function 
# Make sure to draw out all faces/eyes found in each frame on the shown video feed

import cv2
import numpy as np
import time

# wrapper function for face/eye detection with your laptop camera
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False

    # Load the pre-trained face and eye detectors once, outside the frame loop
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
    eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

    # Keep the video stream open
    while rval:
        # Camera frames arrive in BGR order, so convert BGR -> grayscale for detection
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Detect faces and mark them with red boxes (red is (0,0,255) in BGR order)
        faces = face_cascade.detectMultiScale(gray, 1.25, 6)
        frame_with_det = np.copy(frame)
        for (x,y,w,h) in faces:
            cv2.rectangle(frame_with_det, (x,y), (x+w,y+h), (0,0,255), 2)

        # Detect eyes and mark them with green boxes
        eyes = eye_cascade.detectMultiScale(gray, 1.15, 2)
        for (x,y,w,h) in eyes:
            cv2.rectangle(frame_with_det, (x,y), (x+w, y+h), (0,255,0), 2)
        '''
        nose_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_mcs_nose.xml')
        noses = nose_cascade.detectMultiScale(gray, 1.05,3)
        for(x,y,w,h) in noses:
            cv2.rectangle(frame_with_det, (x,y), (x+w, y+h), (255, 0, 0), 1)
        '''
        cv2.imshow("face detection activated", frame_with_det)
        
        # Exit functionality - press the '0' key to exit the laptop video
        key = cv2.waitKey(20)
        if key == ord('0'): # Exit by pressing the '0' key
            
            # Destroy windows 
            cv2.destroyAllWindows()
            
            # Make sure window closes on OSx
            for i in range (1,5):
                cv2.waitKey(1)
            return  frame_with_det
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
In [27]:
# Call the laptop camera face/eye detector function above
snapped_img = laptop_camera_go()
In [28]:
snapped_img = cv2.cvtColor(snapped_img, cv2.COLOR_BGR2RGB)
# plot the snapped image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Snapped face/eyes detected image')
ax1.imshow(snapped_img, cmap='gray')
Out[28]:
<matplotlib.image.AxesImage at 0xcc50630>

Step 2: De-noise an Image for Better Face Detection

Image quality is an important aspect of any computer vision task. Typically, when creating a set of images to train a deep learning network, significant care is taken to ensure that training images are free of visual noise or artifacts that hinder object detection. While computer vision algorithms - like a face detector - are typically trained on 'nice' data such as this, new test data doesn't always look so nice!

When applying a trained computer vision algorithm to a new piece of test data one often cleans it up first before feeding it in. This sort of cleaning - referred to as pre-processing - can include a number of cleaning phases like blurring, de-noising, color transformations, etc., and many of these tasks can be accomplished using OpenCV.

In this short subsection we explore OpenCV's noise-removal functionality to see how we can clean up a noisy image, which we then feed into our trained face detector.

Create a noisy image to work with

In the next cell, we create an artificial noisy version of the previous multi-face image. This is a little exaggerated - we don't typically get images that are this noisy - but image noise, or 'grainy-ness' in a digital image, is a fairly common phenomenon.

In [8]:
# Load in the multi-face test image again
image = cv2.imread('images/test_image_1.jpg')

# Convert the image copy to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Make an array copy of this image
image_with_noise = np.asarray(image)

# Create noise - here we add noise sampled randomly from a Gaussian distribution: a common model for noise
noise_level = 40
noise = np.random.randn(image.shape[0],image.shape[1],image.shape[2])*noise_level

# Add this noise to the array image copy
image_with_noise = image_with_noise + noise

# Convert back to uint8 format
image_with_noise = np.asarray([np.uint8(np.clip(i,0,255)) for i in image_with_noise])

# Plot our noisy image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image')
ax1.imshow(image_with_noise)
Out[8]:
<matplotlib.image.AxesImage at 0xb9f24a8>

In the context of face detection, the problem with an image like this is that - due to noise - we may miss some faces or get false detections.

In the next cell we apply the same trained OpenCV detector with the same settings as before, to see what sort of detections we get.

In [30]:
# Convert the RGB  image to grayscale
gray_noise = cv2.cvtColor(image_with_noise, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(image_with_noise)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Noisy Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 11
Out[30]:
<matplotlib.image.AxesImage at 0xb2b9160>

With this added noise we now miss two of the faces!

(IMPLEMENTATION) De-noise this image for better face detection

Time to get your hands dirty: using OpenCV's built-in color image de-noising function, fastNlMeansDenoisingColored, de-noise this image enough so that all the faces in the image are properly detected. Once you have cleaned the image in the next cell, use the cell that follows to run our trained face detector over the cleaned image to check out its detections.

You can find its official documentation here and a useful example here.

Note: you can keep all parameters except photo_render fixed as shown in the second link above. Play around with the value of this parameter - see how it affects the resulting cleaned image.

In [31]:
## Use OpenCV's built in color image de-noising function to clean up our noisy image!
denoised_image = np.copy(image_with_noise)

denoised_image = cv2.fastNlMeansDenoisingColored(denoised_image, None, 16.5, 16.5, 7, 21 )

# Plot our denoised image!
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Denoised Image')
ax1.imshow(denoised_image)
Out[31]:
<matplotlib.image.AxesImage at 0xb313198>
In [32]:
## Run the face detector on the de-noised image to improve your detections and display the result
# Convert the RGB  image to grayscale
gray_noise = cv2.cvtColor(denoised_image, cv2.COLOR_RGB2GRAY)

# Extract the pre-trained face detector from an xml file
face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

# Detect the faces in image
faces = face_cascade.detectMultiScale(gray_noise, 4, 6)

# Print the number of faces detected in the image
print('Number of faces detected:', len(faces))

# Make a copy of the original image to draw face detections on
image_with_detections = np.copy(denoised_image)

# Get the bounding box for each detected face
for (x,y,w,h) in faces:
    # Add a red bounding box to the detections image
    cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 3)
    

# Display the image with the detections
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Denoised Image with Face Detections')
ax1.imshow(image_with_detections)
Number of faces detected: 13
Out[32]:
<matplotlib.image.AxesImage at 0xb32b6d8>

Step 3: Blur an Image and Perform Edge Detection

Now that we have developed a simple pipeline for detecting faces using OpenCV - let's start playing around with a few fun things we can do with all those detected faces!

Importance of Blur in Edge Detection

Edge detection is a concept that pops up almost everywhere in computer vision applications, as edge-based features (as well as features built on top of edges) are often some of the best features for e.g., object detection and recognition problems.

Edge detection is a dimension reduction technique - by keeping only the edges of an image we get to throw away a lot of non-discriminating information. And typically the most useful kind of edge-detection is one that preserves only the important, global structures (ignoring local structures that aren't very discriminative). So removing local structures / retaining global structures is a crucial pre-processing step to performing edge detection in an image, and blurring can do just that.

Below is an animated gif showing the result of an edge-detected cat taken from Wikipedia, where the image is gradually blurred more and more prior to edge detection. When the animation begins you can't quite make out what it's a picture of, but as the animation evolves and local structures are removed via blurring the cat becomes visible in the edge-detected image.

Edge detection is a convolution performed on the image itself, and you can read about Canny edge detection on this OpenCV documentation page.

Canny edge detection

In the cell below we load in a test image, then apply Canny edge detection on it. The original image is shown on the left panel of the figure, while the edge-detected version of the image is shown on the right. Notice how the result looks very busy - there are too many little details preserved in the image before it is sent to the edge detector. When applied in computer vision applications, edge detection should preserve global structure while doing away with local structures that don't help describe what objects are in the image.

In [33]:
# Load in the image
image = cv2.imread('images/fawzia.jpg')

# Convert to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)  

# Perform Canny edge detection
edges = cv2.Canny(gray,100,200)

# Dilate the image to amplify edges
edges = cv2.dilate(edges, None)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image')
ax1.imshow(image)

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(edges, cmap='gray')
Out[33]:
<matplotlib.image.AxesImage at 0xb39c9e8>

Without first blurring the image, and removing small, local structures, a lot of irrelevant edge content gets picked up and amplified by the detector (as shown in the right panel above).

(IMPLEMENTATION) Blur the image then perform edge detection

In the next cell, you will repeat this experiment - blurring the image first to remove these local structures, so that only the important boundary details remain in the edge-detected image.

Blur the image using OpenCV's filter2D function - which is discussed in this documentation page - and use an averaging kernel of width equal to 4.

In [9]:
### Blur the test image using OpenCV's filter2D functionality,
# Use an averaging kernel, and a kernel width equal to 4
kernel = np.ones((4,4),np.float32)/16
image = cv2.imread('images/fawzia.jpg')
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY) 

blurred_image = cv2.filter2D(gray,-1,kernel)

## TODO: Then perform Canny edge detection and display the output
blurred_edges = cv2.Canny(blurred_image,100,200)

# Plot the RGB and edge-detected image
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(121)
ax1.set_xticks([])
ax1.set_yticks([])

ax1.set_title('Original Image - Blurred')
ax1.imshow(blurred_image, cmap='gray')

ax2 = fig.add_subplot(122)
ax2.set_xticks([])
ax2.set_yticks([])

ax2.set_title('Canny Edges')
ax2.imshow(blurred_edges, cmap='gray')
Out[9]:
<matplotlib.image.AxesImage at 0xbad0fd0>

Step 4: Automatically Hide the Identity of an Individual

If you film something like a documentary or reality TV, you must get permission from every individual shown on film before you can show their face; otherwise, you need to blur it out - by blurring the face a lot (so much so that even the global structures are obscured)! The same is true for projects like Google's StreetView maps - an enormous collection of mapping images taken from a fleet of Google vehicles. Because it would be impossible for Google to get the permission of every single person accidentally captured in one of these images, the mapping software must automatically blur out the faces of detected people. Here are a few examples of folks caught in the camera of a Google street view vehicle.

Read in an image to perform identity detection

Let's try this out for ourselves. Use the face detection pipeline built above along with what you know about using filter2D to blur an image, and apply these in tandem to hide the identity of the person in the following image - loaded in and displayed in the next cell.

In [10]:
# Load in the image
image = cv2.imread('images/gus.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[10]:
<matplotlib.image.AxesImage at 0xbb66c88>

(IMPLEMENTATION) Use blurring to hide the identity of an individual in an image

The idea here is to 1) automatically detect the face in this image, and then 2) blur it out! Make sure to adjust the parameters of the averaging blur filter to completely obscure this person's identity.

In [36]:
## Implement face detection
# Convert the RGB  image to grayscale

def blur_faces(img):
    
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)

    # Extract the pre-trained face detector from an xml file
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')

    # Detect the faces in image
    faces = face_cascade.detectMultiScale(gray, 1.30, 10)

    # Make a copy of the original image to draw face detections on
    image_with_detections = np.copy(img)

    # Get the bounding box for each detected face
    for (x,y,w,h) in faces:
        # Add a red bounding box to the detections image
        #cv2.rectangle(image_with_detections, (x,y), (x+w,y+h), (255,0,0), 6)
        clip = image_with_detections[y:y+h, x:x+w]
    
        ## Blur the bounding box around each detected face using an averaging filter and display the result
        # Use a large averaging kernel so that even the global facial structure is obscured
        blur_factor = 75
        kernel = np.ones((blur_factor,blur_factor),np.float32)/(blur_factor*blur_factor)
        blurred_clip = cv2.filter2D(clip,-1,kernel)
        image_with_detections[y:y+h, x:x+w] = blurred_clip
    
    return image_with_detections

image_with_detections = blur_faces(image)

# Display the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with blurred out faces')
ax1.imshow(image_with_detections)
Out[36]:
<matplotlib.image.AxesImage at 0xb57e978>

(Optional) Build identity protection into your laptop camera

In this optional task you can add identity protection to your laptop camera, using the previously completed code where you added face detection to your laptop camera - and the task above. You should be able to get reasonable results with little parameter tuning - like the one shown in the gif below.

As with the previous video task, to make this perfect would require significant effort - so don't strive for perfection here, strive for reasonable quality.

The next cell contains code for a wrapper function called laptop_camera_go that - when called - will activate your laptop's camera. You need to place the relevant face detection and blurring code developed above in this function in order to blur faces entering your laptop camera's field of view.

Before adding anything to the function, you can call it to get a feel for how it works - a small window will pop up showing you the live feed from your camera; you can press any key to close this window.

Note: Mac users may find that activating this function kills the kernel of their notebook every once in a while. If this happens to you, just restart your notebook's kernel, activate cell(s) containing any crucial import statements, and you'll be good to go!

In [37]:
### Insert face detection and blurring code into the wrapper below to create an identity protector on your laptop!
import cv2
import time 

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        #Blur the faces within the image.
        blurred_frame = blur_faces(frame)
        # Plot image from camera with detections marked
        cv2.imshow("face detection activated", blurred_frame)
        
        # Exit functionality - press the '0' key to exit the laptop video
        key = cv2.waitKey(20)
        if key == ord('0'): # Exit by pressing the '0' key
            # Destroy windows
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return blurred_frame

        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [38]:
# Run laptop identity hider
snapped_img = laptop_camera_go()
In [39]:
snapped_img = cv2.cvtColor(snapped_img, cv2.COLOR_BGR2RGB)
# plot the snapped image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with masked identity')
ax1.imshow(snapped_img, cmap='gray')
Out[39]:
<matplotlib.image.AxesImage at 0xcf52f60>

Step 5: Create a CNN to Recognize Facial Keypoints

OpenCV is often used in practice with other machine learning and deep learning libraries to produce interesting results. In this stage of the project you will create your own end-to-end pipeline - employing convolutional networks in keras along with OpenCV - to apply a "selfie" filter to streaming video and images.

You will start by creating and then training a convolutional network that can detect facial keypoints in a small dataset of cropped images of human faces. We then guide you toward using OpenCV to expand your detection algorithm to more general images. What are facial keypoints? Let's take a look at some examples.

Facial keypoints (also called facial landmarks) are the small blue-green dots shown on each of the faces in the image above - there are 15 keypoints marked in each image. They mark important areas of the face - the eyes, corners of the mouth, the nose, etc. Facial keypoints can be used in a variety of machine learning applications from face and emotion recognition to commercial applications like the image filters popularized by Snapchat.

Below we illustrate a filter that, using the results of this section, automatically places sunglasses on people in images (using the facial keypoints to place the glasses correctly on each face). Here, the facial keypoints have been colored lime green for visualization purposes.

Make a facial keypoint detector

But first things first: how can we make a facial keypoint detector? Well, at a high level, notice that facial keypoint detection is a regression problem. A single face corresponds to a set of 15 facial keypoints (a set of 15 corresponding $(x, y)$ coordinates, i.e., an output point). Because our input data are images, we can employ a convolutional neural network to recognize patterns in our images and learn how to identify these keypoints given sets of labeled data.

In order to train a regressor, we need a training set - a set of facial image / facial keypoint pairs to train on. For this we will be using this dataset from Kaggle. We've already downloaded this data and placed it in the data directory. Make sure that you have both the training and test data files. The training dataset contains several thousand $96 \times 96$ grayscale images of cropped human faces, along with each face's 15 corresponding facial keypoints (also called landmarks) that have been placed by hand, and recorded in $(x, y)$ coordinates. This wonderful resource also has a substantial testing set, which we will use in tinkering with our convolutional network.

To load in this data, run the Python cell below - notice we will load in both the training and testing sets.

The load_data function is in the included utils.py file.

In [40]:
from utils import *

# Load training set
X_train, y_train = load_data()
print("X_train.shape == {}".format(X_train.shape))
print("y_train.shape == {}; y_train.min == {:.3f}; y_train.max == {:.3f}".format(
    y_train.shape, y_train.min(), y_train.max()))

# Load testing set
X_test, _ = load_data(test=True)
print("X_test.shape == {}".format(X_test.shape))
Using TensorFlow backend.
X_train.shape == (2140, 96, 96, 1)
y_train.shape == (2140, 30); y_train.min == -0.920; y_train.max == 0.996
X_test.shape == (1783, 96, 96, 1)

The load_data function in utils.py originates from this excellent blog post, which you are strongly encouraged to read. Please take the time now to review this function. Note how the output values - that is, the coordinates of each set of facial landmarks - have been normalized to take on values in the range $[-1, 1]$, while the pixel values of each input point (a facial image) have been normalized to the range $[0,1]$.

Note: the original Kaggle dataset contains some images with several missing keypoints. For simplicity, the load_data function removes those images with missing labels from the dataset. As an optional extension, you are welcome to amend the load_data function to include the incomplete data points.
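
As a quick illustration of this normalization, assuming load_data follows the referenced blog post and maps each pixel coordinate $p$ to $(p - 48) / 48$ for these $96 \times 96$ images, a keypoint vector can be mapped back to pixel coordinates with a small helper like the (hypothetical, not part of utils.py) one below:

# Hypothetical helper: map a 30-entry keypoint vector in [-1, 1] back to (x, y) pixel
# coordinates on a 96x96 image, assuming the (p - 48) / 48 normalization described above.
def keypoints_to_pixels(keypoints, image_size=96):
    half = image_size / 2.0
    xs = keypoints[0::2] * half + half   # even entries are x coordinates
    ys = keypoints[1::2] * half + half   # odd entries are y coordinates
    return xs, ys

xs, ys = keypoints_to_pixels(y_train[0])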

Visualize the Training Data

Execute the code cell below to visualize a subset of the training data.

In [41]:
import matplotlib.pyplot as plt
%matplotlib inline

fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_train[i], y_train[i], ax)

For each training image, there are two landmarks per eyebrow (four total), three per eye (six total), four for the mouth, and one for the tip of the nose.

Review the plot_data function in utils.py to understand how the 30-dimensional training labels in y_train are mapped to facial locations, as this function will prove useful for your pipeline.

(IMPLEMENTATION) Specify the CNN Architecture

In this section, you will specify a neural network for predicting the locations of facial keypoints. Use the code cell below to specify the architecture of your neural network. We have imported some layers that you may find useful for this task, but if you need to use more Keras layers, feel free to import them in the cell.

Your network should accept a $96 \times 96$ grayscale image as input, and it should output a vector with 30 entries, corresponding to the predicted (horizontal and vertical) locations of 15 facial keypoints. If you are not sure where to start, you can find some useful starting architectures in this blog, but you are not permitted to copy any of the architectures that you find online.

In [18]:
# Import deep learning resources from Keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout
from keras.layers import Flatten, Dense


## Specify a CNN architecture
# Your model should accept 96x96 pixel grayscale images as input
# It should have a fully-connected output layer with 30 values (2 for each facial keypoint)

def get_model():
    model = Sequential()
    model.add(Conv2D(16, (3,3), padding='same', activation='relu', 
                            input_shape=(96, 96, 1)))
    model.add(Conv2D(16,(2,2), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.35))

    model.add(Conv2D(32, (3,3), padding='same', activation='relu'))
    model.add(Conv2D(32,(2,2), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.35))

    model.add(Conv2D(64, (3,3), padding='same', activation='relu'))
    model.add(Conv2D(64,(2,2), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.45))          

    model.add(Conv2D(128, (3,3), padding='same', activation='relu'))
    model.add(Conv2D(128,(2,2), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.45))           
          
    model.add(Conv2D(256, (3,3), padding='same', activation='relu'))
    model.add(Conv2D(256,(2,2), activation='relu'))
    model.add(MaxPooling2D(pool_size=2))
    model.add(Dropout(0.55))     
          
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.55))
    model.add(Dense(30))

    # Summarize the model
    model.summary()
    return model

Step 6: Compile and Train the Model

After specifying your architecture, you'll need to compile and train the model to detect facial keypoints.

(IMPLEMENTATION) Compile and Train the Model

Use the compile method to configure the learning process. Experiment with your choice of optimizer; you may have some ideas about which will work best (SGD vs. RMSprop, etc), but take the time to empirically verify your theories.

Use the fit method to train the model. Break off a validation set by setting validation_split=0.2. Save the returned History object in the history variable.

Experiment with your model to minimize the validation loss (measured as mean squared error). A very good model will achieve about 0.0015 loss (though it's possible to do even better). When you have finished training, save your model as an HDF5 file with file path my_model.h5.
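
A minimal sketch of the required calls is shown below (the Adam optimizer here is only a placeholder choice; the cell that follows explores several alternatives and saves one model per optimizer):

# Minimal sketch of compile / fit / save (optimizer choice is a placeholder)
from keras.optimizers import Adam

model = get_model()
model.compile(optimizer=Adam(lr=0.001), loss='mse')

# Hold out 20% of the training data for validation and keep the returned History object
history = model.fit(X_train, y_train, batch_size=32, epochs=100, validation_split=0.2)

# Save the trained model as an HDF5 file
model.save('my_model.h5')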

In [5]:
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam

batch_size = 32
epochs=100
loss='mse'
metrics=['accuracy','mse']
val_split=0.2
verbose=1
shuffle=True

## Compile the model
#Define an SGD optimizer.
sgd = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=False)
#Define an SGD optimizer with Nesterov momentum
sgd_nesterov = SGD(lr=0.01, momentum=0.9, decay=1e-6, nesterov=True)

#Define an RMSProp optimizer
rmsprop = RMSprop(lr=0.001, rho=0.9, decay=1e-6)

#Define an adagrad optimizer
adagrad = Adagrad(lr=0.01, decay=0.0)

#Define an adadelta optimizer
adadelta = Adadelta(lr=1.0, rho=0.95, decay=0.0)

#Define an adam optimizer
adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

#Define an adam optimizer with amsgrad
adam_amsgrad = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, decay=0.0, amsgrad=True)

#Define an adamax optimizer
adamax = Adamax(lr=0.002, beta_1=0.9, beta_2=0.999)

#Define a nadam optimizer
nadam = Nadam(lr=0.002, beta_1=0.9, beta_2=0.999, schedule_decay=0.004)

#Create a dictionary of optimizers
optimizer_dict = {
    'sgd' : sgd,
    'sgd_nesterov' : sgd_nesterov,
    'rmsprop' : rmsprop,
    'adagrad' : adagrad,
    'adadelta': adadelta,
    'adam' : adam,
    'adam_amsgrad': adam_amsgrad,
    'adamax' : adamax,
    'nadam' : nadam
}

#Create dictionaries to hold the trained model and training history for each optimizer.
#Each model is added after it is trained, allowing the optimizers to be compared later.
model_dict = {}
history_dict = {}

for optimizer_name in optimizer_dict:
    print("Now using the optimizer: ",optimizer_name)
    model = get_model()
    optimizer = optimizer_dict[optimizer_name]
    model.compile(loss=loss, optimizer=optimizer, metrics=metrics)

    hist = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, 
                 validation_split=val_split, verbose=verbose, shuffle=shuffle)

    ## Save the trained model for this optimizer as <optimizer_name>.h5
    print("Training complete, saving model as: ", optimizer_name + '.h5')
    model.save(optimizer_name + '.h5')
    model_dict[optimizer_name] = model
    history_dict[optimizer_name] = hist
Now using the optimizer:  nadam
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_2 (Dropout)          (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_5 (Conv2D)            (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_3 (Dropout)          (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_7 (Conv2D)            (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_8 (Conv2D)            (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_4 (Dropout)          (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_9 (Conv2D)            (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_10 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_5 (Dropout)          (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_6 (Dropout)          (None, 512)               0         
_________________________________________________________________
dense_2 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 5s 3ms/step - loss: 0.4718 - acc: 0.3213 - mean_squared_error: 0.4718 - val_loss: 0.0057 - val_acc: 0.6963 - val_mean_squared_error: 0.0057
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0137 - acc: 0.5543 - mean_squared_error: 0.0137 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0106 - acc: 0.6215 - mean_squared_error: 0.0106 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0091 - acc: 0.6612 - mean_squared_error: 0.0091 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0088 - acc: 0.6624 - mean_squared_error: 0.0088 - val_loss: 0.0053 - val_acc: 0.6963 - val_mean_squared_error: 0.0053
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0083 - acc: 0.6741 - mean_squared_error: 0.0083 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0082 - acc: 0.6857 - mean_squared_error: 0.0082 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0081 - acc: 0.6974 - mean_squared_error: 0.0081 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0078 - acc: 0.6904 - mean_squared_error: 0.0078 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0077 - acc: 0.7044 - mean_squared_error: 0.0077 - val_loss: 0.0055 - val_acc: 0.6963 - val_mean_squared_error: 0.0055
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0075 - acc: 0.7062 - mean_squared_error: 0.0075 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.7074 - mean_squared_error: 0.0072 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0073 - acc: 0.7027 - mean_squared_error: 0.0073 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6998 - mean_squared_error: 0.0071 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0068 - acc: 0.7039 - mean_squared_error: 0.0068 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0069 - acc: 0.7050 - mean_squared_error: 0.0069 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0068 - acc: 0.6968 - mean_squared_error: 0.0068 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.7056 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7027 - mean_squared_error: 0.0065 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7009 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7097 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7044 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7015 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7050 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.7027 - mean_squared_error: 0.0058 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.7056 - mean_squared_error: 0.0057 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.7050 - mean_squared_error: 0.0058 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.7068 - mean_squared_error: 0.0056 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7068 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7068 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7068 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7068 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7085 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7074 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7068 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7062 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7074 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7074 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7074 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7068 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7079 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7050 - mean_squared_error: 0.0045 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7056 - mean_squared_error: 0.0044 - val_loss: 0.0040 - val_acc: 0.6963 - val_mean_squared_error: 0.0040
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7062 - mean_squared_error: 0.0043 - val_loss: 0.0039 - val_acc: 0.6963 - val_mean_squared_error: 0.0039
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7050 - mean_squared_error: 0.0043 - val_loss: 0.0039 - val_acc: 0.6963 - val_mean_squared_error: 0.0039
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7068 - mean_squared_error: 0.0042 - val_loss: 0.0039 - val_acc: 0.6963 - val_mean_squared_error: 0.0039
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0039 - acc: 0.7079 - mean_squared_error: 0.0039 - val_loss: 0.0035 - val_acc: 0.6963 - val_mean_squared_error: 0.0035
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0038 - acc: 0.7050 - mean_squared_error: 0.0038 - val_loss: 0.0032 - val_acc: 0.6963 - val_mean_squared_error: 0.0032
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0036 - acc: 0.7033 - mean_squared_error: 0.0036 - val_loss: 0.0031 - val_acc: 0.6963 - val_mean_squared_error: 0.0031
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0034 - acc: 0.6986 - mean_squared_error: 0.0034 - val_loss: 0.0030 - val_acc: 0.6963 - val_mean_squared_error: 0.0030
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0033 - acc: 0.7138 - mean_squared_error: 0.0033 - val_loss: 0.0030 - val_acc: 0.7056 - val_mean_squared_error: 0.0030
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0032 - acc: 0.6974 - mean_squared_error: 0.0032 - val_loss: 0.0025 - val_acc: 0.6986 - val_mean_squared_error: 0.0025
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0031 - acc: 0.7056 - mean_squared_error: 0.0031 - val_loss: 0.0025 - val_acc: 0.7196 - val_mean_squared_error: 0.0025
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0029 - acc: 0.7091 - mean_squared_error: 0.0029 - val_loss: 0.0026 - val_acc: 0.6963 - val_mean_squared_error: 0.0026
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0029 - acc: 0.7126 - mean_squared_error: 0.0029 - val_loss: 0.0024 - val_acc: 0.7056 - val_mean_squared_error: 0.0024
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0028 - acc: 0.7132 - mean_squared_error: 0.0028 - val_loss: 0.0024 - val_acc: 0.7009 - val_mean_squared_error: 0.0024
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0028 - acc: 0.7114 - mean_squared_error: 0.0028 - val_loss: 0.0022 - val_acc: 0.7009 - val_mean_squared_error: 0.0022
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0027 - acc: 0.7155 - mean_squared_error: 0.0027 - val_loss: 0.0022 - val_acc: 0.7009 - val_mean_squared_error: 0.0022
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0026 - acc: 0.7103 - mean_squared_error: 0.0026 - val_loss: 0.0021 - val_acc: 0.6986 - val_mean_squared_error: 0.0021
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0026 - acc: 0.7173 - mean_squared_error: 0.0026 - val_loss: 0.0021 - val_acc: 0.7009 - val_mean_squared_error: 0.0021
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7120 - mean_squared_error: 0.0025 - val_loss: 0.0021 - val_acc: 0.6963 - val_mean_squared_error: 0.0021
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7261 - mean_squared_error: 0.0025 - val_loss: 0.0022 - val_acc: 0.6986 - val_mean_squared_error: 0.0022
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7150 - mean_squared_error: 0.0025 - val_loss: 0.0021 - val_acc: 0.7079 - val_mean_squared_error: 0.0021
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7138 - mean_squared_error: 0.0024 - val_loss: 0.0019 - val_acc: 0.7150 - val_mean_squared_error: 0.0019
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7196 - mean_squared_error: 0.0024 - val_loss: 0.0023 - val_acc: 0.6986 - val_mean_squared_error: 0.0023
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7307 - mean_squared_error: 0.0024 - val_loss: 0.0020 - val_acc: 0.7103 - val_mean_squared_error: 0.0020
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0023 - acc: 0.7202 - mean_squared_error: 0.0023 - val_loss: 0.0019 - val_acc: 0.7173 - val_mean_squared_error: 0.0019
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0023 - acc: 0.7237 - mean_squared_error: 0.0023 - val_loss: 0.0018 - val_acc: 0.7126 - val_mean_squared_error: 0.0018
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0023 - acc: 0.7237 - mean_squared_error: 0.0023 - val_loss: 0.0018 - val_acc: 0.7126 - val_mean_squared_error: 0.0018
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7202 - mean_squared_error: 0.0022 - val_loss: 0.0018 - val_acc: 0.7079 - val_mean_squared_error: 0.0018
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7313 - mean_squared_error: 0.0022 - val_loss: 0.0020 - val_acc: 0.7056 - val_mean_squared_error: 0.0020
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7261 - mean_squared_error: 0.0022 - val_loss: 0.0017 - val_acc: 0.7103 - val_mean_squared_error: 0.0017
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7196 - mean_squared_error: 0.0021 - val_loss: 0.0018 - val_acc: 0.7056 - val_mean_squared_error: 0.0018
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7231 - mean_squared_error: 0.0021 - val_loss: 0.0017 - val_acc: 0.7150 - val_mean_squared_error: 0.0017
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7319 - mean_squared_error: 0.0021 - val_loss: 0.0017 - val_acc: 0.7150 - val_mean_squared_error: 0.0017
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7173 - mean_squared_error: 0.0021 - val_loss: 0.0017 - val_acc: 0.7126 - val_mean_squared_error: 0.0017
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7261 - mean_squared_error: 0.0021 - val_loss: 0.0016 - val_acc: 0.7313 - val_mean_squared_error: 0.0016
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7290 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7243 - val_mean_squared_error: 0.0016
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7196 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7196 - val_mean_squared_error: 0.0016
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7284 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7126 - val_mean_squared_error: 0.0016
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7237 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7220 - val_mean_squared_error: 0.0016
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7354 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7243 - val_mean_squared_error: 0.0016
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7185 - mean_squared_error: 0.0020 - val_loss: 0.0017 - val_acc: 0.7173 - val_mean_squared_error: 0.0017
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7167 - mean_squared_error: 0.0020 - val_loss: 0.0017 - val_acc: 0.7079 - val_mean_squared_error: 0.0017
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7225 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7266 - val_mean_squared_error: 0.0016
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7307 - mean_squared_error: 0.0019 - val_loss: 0.0016 - val_acc: 0.7173 - val_mean_squared_error: 0.0016
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7214 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7266 - val_mean_squared_error: 0.0015
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7307 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7173 - val_mean_squared_error: 0.0015
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7249 - mean_squared_error: 0.0019 - val_loss: 0.0016 - val_acc: 0.7313 - val_mean_squared_error: 0.0016
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7261 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7266 - val_mean_squared_error: 0.0015
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7266 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7266 - val_mean_squared_error: 0.0015
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7179 - mean_squared_error: 0.0019 - val_loss: 0.0014 - val_acc: 0.7243 - val_mean_squared_error: 0.0014
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7261 - mean_squared_error: 0.0018 - val_loss: 0.0016 - val_acc: 0.7360 - val_mean_squared_error: 0.0016
Training complete, saving model as:  nadam.h5
Now using the optimizer:  adam
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_11 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_12 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_7 (Dropout)          (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_13 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_14 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_8 (Dropout)          (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_15 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_16 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_8 (MaxPooling2 (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_9 (Dropout)          (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_17 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_18 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_10 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_19 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_20 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_11 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_2 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_3 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_12 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_4 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0653 - acc: 0.2850 - mean_squared_error: 0.0653 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0100 - acc: 0.5275 - mean_squared_error: 0.0100 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0081 - acc: 0.5789 - mean_squared_error: 0.0081 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6186 - mean_squared_error: 0.0072 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6396 - mean_squared_error: 0.0066 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.6525 - mean_squared_error: 0.0063 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6618 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.6589 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6863 - mean_squared_error: 0.0058 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6811 - mean_squared_error: 0.0057 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6887 - mean_squared_error: 0.0058 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6910 - mean_squared_error: 0.0057 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6910 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6951 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6957 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7004 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7004 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6963 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7009 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7050 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7021 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7027 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7091 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7039 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7044 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7068 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7027 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7044 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7068 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7068 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7062 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7074 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7056 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Training complete, saving model as:  adam.h5
Now using the optimizer:  adamax
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_21 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_22 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_13 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_23 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_24 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_12 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_14 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_25 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_26 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_13 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_15 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_27 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_28 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_14 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_16 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_29 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_30 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_15 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_17 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_3 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_5 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_18 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_6 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0271 - acc: 0.4171 - mean_squared_error: 0.0271 - val_loss: 0.0072 - val_acc: 0.6963 - val_mean_squared_error: 0.0072
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0097 - acc: 0.5526 - mean_squared_error: 0.0097 - val_loss: 0.0065 - val_acc: 0.6963 - val_mean_squared_error: 0.0065
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0083 - acc: 0.5894 - mean_squared_error: 0.0083 - val_loss: 0.0054 - val_acc: 0.6963 - val_mean_squared_error: 0.0054
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0074 - acc: 0.6209 - mean_squared_error: 0.0074 - val_loss: 0.0053 - val_acc: 0.6963 - val_mean_squared_error: 0.0053
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6361 - mean_squared_error: 0.0071 - val_loss: 0.0053 - val_acc: 0.6963 - val_mean_squared_error: 0.0053
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6443 - mean_squared_error: 0.0066 - val_loss: 0.0054 - val_acc: 0.6963 - val_mean_squared_error: 0.0054
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.6460 - mean_squared_error: 0.0065 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.6665 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6752 - mean_squared_error: 0.0060 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6711 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6828 - mean_squared_error: 0.0058 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6840 - mean_squared_error: 0.0059 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6863 - mean_squared_error: 0.0058 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6887 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6898 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6898 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6939 - mean_squared_error: 0.0056 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7009 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6974 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6974 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6992 - mean_squared_error: 0.0054 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7039 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7039 - mean_squared_error: 0.0054 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6986 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.7039 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7044 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7021 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7044 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7062 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7027 - mean_squared_error: 0.0051 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7044 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7027 - mean_squared_error: 0.0049 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7027 - mean_squared_error: 0.0048 - val_loss: 0.0038 - val_acc: 0.6963 - val_mean_squared_error: 0.0038
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7033 - mean_squared_error: 0.0044 - val_loss: 0.0035 - val_acc: 0.6963 - val_mean_squared_error: 0.0035
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.6992 - mean_squared_error: 0.0042 - val_loss: 0.0031 - val_acc: 0.6963 - val_mean_squared_error: 0.0031
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0041 - acc: 0.7079 - mean_squared_error: 0.0041 - val_loss: 0.0030 - val_acc: 0.6963 - val_mean_squared_error: 0.0030
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0038 - acc: 0.7004 - mean_squared_error: 0.0038 - val_loss: 0.0027 - val_acc: 0.6939 - val_mean_squared_error: 0.0027
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0035 - acc: 0.7068 - mean_squared_error: 0.0035 - val_loss: 0.0027 - val_acc: 0.6963 - val_mean_squared_error: 0.0027
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0035 - acc: 0.7074 - mean_squared_error: 0.0035 - val_loss: 0.0024 - val_acc: 0.7150 - val_mean_squared_error: 0.0024
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0034 - acc: 0.7009 - mean_squared_error: 0.0034 - val_loss: 0.0024 - val_acc: 0.6986 - val_mean_squared_error: 0.0024
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0032 - acc: 0.7085 - mean_squared_error: 0.0032 - val_loss: 0.0022 - val_acc: 0.7033 - val_mean_squared_error: 0.0022
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0031 - acc: 0.7114 - mean_squared_error: 0.0031 - val_loss: 0.0021 - val_acc: 0.7126 - val_mean_squared_error: 0.0021
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0030 - acc: 0.7161 - mean_squared_error: 0.0030 - val_loss: 0.0020 - val_acc: 0.7150 - val_mean_squared_error: 0.0020
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0029 - acc: 0.7144 - mean_squared_error: 0.0029 - val_loss: 0.0020 - val_acc: 0.7407 - val_mean_squared_error: 0.0020
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0028 - acc: 0.7220 - mean_squared_error: 0.0028 - val_loss: 0.0018 - val_acc: 0.7336 - val_mean_squared_error: 0.0018
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0027 - acc: 0.7272 - mean_squared_error: 0.0027 - val_loss: 0.0020 - val_acc: 0.7266 - val_mean_squared_error: 0.0020
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0026 - acc: 0.7266 - mean_squared_error: 0.0026 - val_loss: 0.0018 - val_acc: 0.7220 - val_mean_squared_error: 0.0018
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7202 - mean_squared_error: 0.0025 - val_loss: 0.0017 - val_acc: 0.7173 - val_mean_squared_error: 0.0017
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7284 - mean_squared_error: 0.0025 - val_loss: 0.0017 - val_acc: 0.7150 - val_mean_squared_error: 0.0017
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7313 - mean_squared_error: 0.0024 - val_loss: 0.0017 - val_acc: 0.7243 - val_mean_squared_error: 0.0017
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7190 - mean_squared_error: 0.0024 - val_loss: 0.0016 - val_acc: 0.7290 - val_mean_squared_error: 0.0016
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0023 - acc: 0.7284 - mean_squared_error: 0.0023 - val_loss: 0.0015 - val_acc: 0.7266 - val_mean_squared_error: 0.0015
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7167 - mean_squared_error: 0.0022 - val_loss: 0.0016 - val_acc: 0.7360 - val_mean_squared_error: 0.0016
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7266 - mean_squared_error: 0.0022 - val_loss: 0.0016 - val_acc: 0.7383 - val_mean_squared_error: 0.0016
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7383 - mean_squared_error: 0.0021 - val_loss: 0.0015 - val_acc: 0.7407 - val_mean_squared_error: 0.0015
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7325 - mean_squared_error: 0.0021 - val_loss: 0.0014 - val_acc: 0.7173 - val_mean_squared_error: 0.0014
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7389 - mean_squared_error: 0.0021 - val_loss: 0.0014 - val_acc: 0.7290 - val_mean_squared_error: 0.0014
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7331 - mean_squared_error: 0.0020 - val_loss: 0.0014 - val_acc: 0.7220 - val_mean_squared_error: 0.0014
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7261 - mean_squared_error: 0.0020 - val_loss: 0.0014 - val_acc: 0.7407 - val_mean_squared_error: 0.0014
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7366 - mean_squared_error: 0.0019 - val_loss: 0.0014 - val_acc: 0.7407 - val_mean_squared_error: 0.0014
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7407 - mean_squared_error: 0.0019 - val_loss: 0.0013 - val_acc: 0.7407 - val_mean_squared_error: 0.0013
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7284 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7336 - val_mean_squared_error: 0.0015
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7465 - mean_squared_error: 0.0018 - val_loss: 0.0012 - val_acc: 0.7547 - val_mean_squared_error: 0.0012
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7453 - mean_squared_error: 0.0018 - val_loss: 0.0012 - val_acc: 0.7290 - val_mean_squared_error: 0.0012
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7488 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7336 - val_mean_squared_error: 0.0013
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7453 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7243 - val_mean_squared_error: 0.0013
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7412 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7336 - val_mean_squared_error: 0.0013
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7500 - mean_squared_error: 0.0017 - val_loss: 0.0012 - val_acc: 0.7453 - val_mean_squared_error: 0.0012
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7512 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7523 - val_mean_squared_error: 0.0013
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7535 - mean_squared_error: 0.0016 - val_loss: 0.0012 - val_acc: 0.7313 - val_mean_squared_error: 0.0012
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7430 - mean_squared_error: 0.0016 - val_loss: 0.0012 - val_acc: 0.7430 - val_mean_squared_error: 0.0012
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7699 - mean_squared_error: 0.0015 - val_loss: 0.0011 - val_acc: 0.7500 - val_mean_squared_error: 0.0011
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7599 - mean_squared_error: 0.0015 - val_loss: 0.0011 - val_acc: 0.7477 - val_mean_squared_error: 0.0011
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7523 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7523 - val_mean_squared_error: 0.0012
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7564 - mean_squared_error: 0.0015 - val_loss: 0.0011 - val_acc: 0.7453 - val_mean_squared_error: 0.0011
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7558 - mean_squared_error: 0.0015 - val_loss: 0.0011 - val_acc: 0.7617 - val_mean_squared_error: 0.0011
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7535 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7570 - val_mean_squared_error: 0.0011
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7582 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7500 - val_mean_squared_error: 0.0011
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7704 - mean_squared_error: 0.0014 - val_loss: 0.0012 - val_acc: 0.7617 - val_mean_squared_error: 0.0012
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7605 - mean_squared_error: 0.0014 - val_loss: 0.0010 - val_acc: 0.7593 - val_mean_squared_error: 0.0010
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7681 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7687 - val_mean_squared_error: 0.0011
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7728 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7547 - val_mean_squared_error: 0.0011
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7704 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7850 - val_mean_squared_error: 0.0010
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7675 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.8061 - val_mean_squared_error: 0.0011
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7669 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7874 - val_mean_squared_error: 0.0011
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7658 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7664 - val_mean_squared_error: 0.0011
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7780 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7734 - val_mean_squared_error: 0.0010
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7699 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.7897 - val_mean_squared_error: 0.0011
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7722 - mean_squared_error: 0.0013 - val_loss: 9.7347e-04 - val_acc: 0.7850 - val_mean_squared_error: 9.7347e-04
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7763 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.8061 - val_mean_squared_error: 0.0010
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7856 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.8014 - val_mean_squared_error: 0.0011
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7991 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.8014 - val_mean_squared_error: 0.0011
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7798 - mean_squared_error: 0.0012 - val_loss: 9.7197e-04 - val_acc: 0.7710 - val_mean_squared_error: 9.7197e-04
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7757 - mean_squared_error: 0.0012 - val_loss: 9.2048e-04 - val_acc: 0.8037 - val_mean_squared_error: 9.2048e-04
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7681 - mean_squared_error: 0.0012 - val_loss: 9.2315e-04 - val_acc: 0.7897 - val_mean_squared_error: 9.2315e-04
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7921 - mean_squared_error: 0.0012 - val_loss: 9.3344e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.3344e-04
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7880 - mean_squared_error: 0.0012 - val_loss: 9.9460e-04 - val_acc: 0.8014 - val_mean_squared_error: 9.9460e-04
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7775 - mean_squared_error: 0.0011 - val_loss: 0.0010 - val_acc: 0.7921 - val_mean_squared_error: 0.0010
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7880 - mean_squared_error: 0.0011 - val_loss: 9.5054e-04 - val_acc: 0.8201 - val_mean_squared_error: 9.5054e-04
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7897 - mean_squared_error: 0.0011 - val_loss: 9.4231e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.4231e-04
Training complete, saving model as:  adamax.h5
Now using the optimizer:  sgd
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_31 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_32 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_19 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_33 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_34 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_17 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_20 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_35 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_36 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_18 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_21 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_37 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_38 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_22 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_39 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_40 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_20 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_23 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_7 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_24 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_8 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0959 - acc: 0.1606 - mean_squared_error: 0.0959 - val_loss: 0.0312 - val_acc: 0.6822 - val_mean_squared_error: 0.0312
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0193 - acc: 0.4019 - mean_squared_error: 0.0193 - val_loss: 0.0116 - val_acc: 0.6963 - val_mean_squared_error: 0.0116
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0124 - acc: 0.4731 - mean_squared_error: 0.0124 - val_loss: 0.0088 - val_acc: 0.6963 - val_mean_squared_error: 0.0088
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0107 - acc: 0.5596 - mean_squared_error: 0.0107 - val_loss: 0.0075 - val_acc: 0.6963 - val_mean_squared_error: 0.0075
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0095 - acc: 0.5631 - mean_squared_error: 0.0095 - val_loss: 0.0066 - val_acc: 0.6963 - val_mean_squared_error: 0.0066
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0089 - acc: 0.5678 - mean_squared_error: 0.0089 - val_loss: 0.0060 - val_acc: 0.6963 - val_mean_squared_error: 0.0060
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0083 - acc: 0.5935 - mean_squared_error: 0.0083 - val_loss: 0.0054 - val_acc: 0.6963 - val_mean_squared_error: 0.0054
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0079 - acc: 0.6133 - mean_squared_error: 0.0079 - val_loss: 0.0053 - val_acc: 0.6963 - val_mean_squared_error: 0.0053
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0076 - acc: 0.6069 - mean_squared_error: 0.0076 - val_loss: 0.0051 - val_acc: 0.6963 - val_mean_squared_error: 0.0051
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0074 - acc: 0.6168 - mean_squared_error: 0.0074 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6308 - mean_squared_error: 0.0071 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0070 - acc: 0.6402 - mean_squared_error: 0.0070 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0068 - acc: 0.6127 - mean_squared_error: 0.0068 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6443 - mean_squared_error: 0.0066 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.6349 - mean_squared_error: 0.0065 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.6554 - mean_squared_error: 0.0064 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.6554 - mean_squared_error: 0.0063 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6565 - mean_squared_error: 0.0062 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.6589 - mean_squared_error: 0.0061 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6618 - mean_squared_error: 0.0060 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6741 - mean_squared_error: 0.0059 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6717 - mean_squared_error: 0.0059 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6676 - mean_squared_error: 0.0058 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6636 - mean_squared_error: 0.0057 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6711 - mean_squared_error: 0.0057 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6723 - mean_squared_error: 0.0056 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6834 - mean_squared_error: 0.0056 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6852 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6875 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6822 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6805 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6828 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6916 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6893 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6881 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6916 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6945 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6939 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6951 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6922 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6933 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6974 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6980 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6992 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.6974 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7009 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7027 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7021 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.6968 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7009 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.6992 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7027 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7044 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7033 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7033 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7062 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.6998 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7062 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7044 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7079 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7027 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7062 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7068 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7079 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7062 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7050 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7056 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7050 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7079 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7062 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Training complete, saving model as:  sgd.h5
Now using the optimizer:  adam_amsgrad
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_41 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_42 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_21 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_25 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_43 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_44 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_22 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_26 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_45 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_46 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_23 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_27 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_47 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_48 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_24 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_28 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_49 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_50 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_25 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_29 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_5 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_9 (Dense)              (None, 512)               131584    
_________________________________________________________________
dropout_30 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_10 (Dense)             (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0478 - acc: 0.3750 - mean_squared_error: 0.0478 - val_loss: 0.0068 - val_acc: 0.6963 - val_mean_squared_error: 0.0068
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0097 - acc: 0.5485 - mean_squared_error: 0.0097 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0080 - acc: 0.5748 - mean_squared_error: 0.0080 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6197 - mean_squared_error: 0.0071 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0067 - acc: 0.6227 - mean_squared_error: 0.0067 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.6454 - mean_squared_error: 0.0065 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6525 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6647 - mean_squared_error: 0.0059 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6805 - mean_squared_error: 0.0060 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6746 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6817 - mean_squared_error: 0.0056 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6811 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6922 - mean_squared_error: 0.0057 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6898 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6922 - mean_squared_error: 0.0055 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6910 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6957 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6998 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6974 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7044 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7015 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6963 - mean_squared_error: 0.0053 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7021 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7033 - mean_squared_error: 0.0052 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7050 - mean_squared_error: 0.0052 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7027 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7044 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7056 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7068 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7062 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7074 - mean_squared_error: 0.0051 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7068 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7079 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7062 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7068 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7068 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7068 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7068 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7079 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Training complete, saving model as:  adam_amsgrad.h5
Now using the optimizer:  rmsprop
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_51 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_52 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_26 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_31 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_53 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_54 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_27 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_32 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_55 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_56 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_33 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_57 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_58 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_29 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_34 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_59 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_60 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_30 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_35 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_6 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_11 (Dense)             (None, 512)               131584    
_________________________________________________________________
dropout_36 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_12 (Dense)             (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.1011 - acc: 0.3359 - mean_squared_error: 0.1011 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0128 - acc: 0.5315 - mean_squared_error: 0.0128 - val_loss: 0.0097 - val_acc: 0.6963 - val_mean_squared_error: 0.0097
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0081 - acc: 0.6063 - mean_squared_error: 0.0081 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6460 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6676 - mean_squared_error: 0.0060 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6811 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6904 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6939 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7027 - mean_squared_error: 0.0053 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7009 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7068 - mean_squared_error: 0.0050 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7074 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7068 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7062 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7062 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7074 - mean_squared_error: 0.0049 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Training complete, saving model as:  rmsprop.h5
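
For reference, a minimal sketch of the loop that would produce output of this shape is given here. Only what the log itself shows is taken as given: the optimizer names (rmsprop, adagrad, adadelta, sgd_nesterov), the mean-squared-error loss, the accuracy and MSE metrics, 100 epochs, the 1712/428 train/validation split, and the "<optimizer>.h5" filenames. The helper get_new_model and the arrays X_train, y_train are hypothetical placeholders standing in for the CNN builder from Step 5 and the keypoint data prepared earlier.

    from keras.optimizers import SGD

    # Optimizers compared in this experiment; 'sgd_nesterov' is assumed to be
    # plain SGD with Nesterov momentum, since Keras has no optimizer by that name.
    optimizers_to_try = {
        'rmsprop': 'rmsprop',
        'adagrad': 'adagrad',
        'adadelta': 'adadelta',
        'sgd_nesterov': SGD(momentum=0.9, nesterov=True),
    }

    for name, optimizer in optimizers_to_try.items():
        print('Now using the optimizer: ', name)

        # Rebuild a fresh copy of the CNN for each optimizer so the runs are
        # comparable (get_new_model is a hypothetical helper, not shown here).
        model = get_new_model()
        model.summary()

        # Regression on the 30 keypoint coordinates: MSE loss, with accuracy
        # and MSE reported as metrics, matching the columns in the log above.
        model.compile(optimizer=optimizer,
                      loss='mean_squared_error',
                      metrics=['accuracy', 'mean_squared_error'])

        # validation_split=0.2 on 2140 samples gives the 1712/428 split seen
        # in the "Train on 1712 samples, validate on 428 samples" lines.
        model.fit(X_train, y_train,
                  validation_split=0.2,
                  epochs=100,
                  verbose=1)

        print('Training complete, saving model as: ', name + '.h5')
        model.save(name + '.h5')

This is a sketch under the assumptions stated above, not the exact cell that generated the log; details such as the batch size are not visible in the output and are left at Keras defaults here.
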
Now using the optimizer:  adagrad
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_61 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_62 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_31 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_37 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_63 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_64 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_32 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_38 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_65 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_66 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_33 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_39 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_67 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_68 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_34 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_40 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_69 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_70 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_35 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_41 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_7 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_13 (Dense)             (None, 512)               131584    
_________________________________________________________________
dropout_42 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_14 (Dense)             (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 3.5326 - acc: 0.3808 - mean_squared_error: 3.5326 - val_loss: 0.0103 - val_acc: 0.6963 - val_mean_squared_error: 0.0103
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0116 - acc: 0.5654 - mean_squared_error: 0.0116 - val_loss: 0.0082 - val_acc: 0.6963 - val_mean_squared_error: 0.0082
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0093 - acc: 0.5900 - mean_squared_error: 0.0093 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0085 - acc: 0.6454 - mean_squared_error: 0.0085 - val_loss: 0.0055 - val_acc: 0.6963 - val_mean_squared_error: 0.0055
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0080 - acc: 0.6390 - mean_squared_error: 0.0080 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0078 - acc: 0.6437 - mean_squared_error: 0.0078 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0074 - acc: 0.6589 - mean_squared_error: 0.0074 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0073 - acc: 0.6618 - mean_squared_error: 0.0073 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6717 - mean_squared_error: 0.0072 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6752 - mean_squared_error: 0.0072 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6822 - mean_squared_error: 0.0071 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0071 - acc: 0.6852 - mean_squared_error: 0.0071 - val_loss: 0.0052 - val_acc: 0.6963 - val_mean_squared_error: 0.0052
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0069 - acc: 0.6928 - mean_squared_error: 0.0069 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6945 - mean_squared_error: 0.0066 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0068 - acc: 0.6898 - mean_squared_error: 0.0068 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6963 - mean_squared_error: 0.0066 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6986 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0067 - acc: 0.6986 - mean_squared_error: 0.0067 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6992 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0067 - acc: 0.6957 - mean_squared_error: 0.0067 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.7033 - mean_squared_error: 0.0066 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.7009 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.7015 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.7021 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0067 - acc: 0.7015 - mean_squared_error: 0.0067 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6998 - mean_squared_error: 0.0066 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7027 - mean_squared_error: 0.0065 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7009 - mean_squared_error: 0.0065 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7056 - mean_squared_error: 0.0065 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.7068 - mean_squared_error: 0.0064 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7074 - mean_squared_error: 0.0065 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7050 - mean_squared_error: 0.0065 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.7044 - mean_squared_error: 0.0064 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7074 - mean_squared_error: 0.0065 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.7044 - mean_squared_error: 0.0064 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.7068 - mean_squared_error: 0.0064 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.7062 - mean_squared_error: 0.0065 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7062 - mean_squared_error: 0.0063 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7050 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.7068 - mean_squared_error: 0.0064 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7062 - mean_squared_error: 0.0062 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7068 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7056 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7056 - mean_squared_error: 0.0061 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7079 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7068 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7068 - mean_squared_error: 0.0062 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7068 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7068 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.7074 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7068 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.7074 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.7074 - mean_squared_error: 0.0061 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.7074 - mean_squared_error: 0.0059 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.7074 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Training complete, saving model as:  adagrad.h5
Now using the optimizer:  adadelta
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_71 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_72 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_36 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_43 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_73 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_74 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_37 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_44 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_75 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_76 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_38 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_45 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_77 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_78 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_39 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_46 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_79 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_80 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_40 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_47 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_8 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_15 (Dense)             (None, 512)               131584    
_________________________________________________________________
dropout_48 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_16 (Dense)             (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0568 - acc: 0.4013 - mean_squared_error: 0.0568 - val_loss: 0.0122 - val_acc: 0.6963 - val_mean_squared_error: 0.0122
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0130 - acc: 0.4924 - mean_squared_error: 0.0130 - val_loss: 0.0078 - val_acc: 0.6963 - val_mean_squared_error: 0.0078
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0102 - acc: 0.5368 - mean_squared_error: 0.0102 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0086 - acc: 0.5864 - mean_squared_error: 0.0086 - val_loss: 0.0060 - val_acc: 0.6963 - val_mean_squared_error: 0.0060
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0078 - acc: 0.6256 - mean_squared_error: 0.0078 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6373 - mean_squared_error: 0.0072 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0068 - acc: 0.6525 - mean_squared_error: 0.0068 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.6583 - mean_squared_error: 0.0064 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6612 - mean_squared_error: 0.0062 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6735 - mean_squared_error: 0.0060 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6741 - mean_squared_error: 0.0058 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6811 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6811 - mean_squared_error: 0.0055 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6933 - mean_squared_error: 0.0054 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6945 - mean_squared_error: 0.0053 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7015 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7004 - mean_squared_error: 0.0052 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7044 - mean_squared_error: 0.0051 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7021 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7039 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7062 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7062 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7062 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7074 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7079 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7074 - mean_squared_error: 0.0045 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7074 - mean_squared_error: 0.0043 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7068 - mean_squared_error: 0.0042 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7068 - mean_squared_error: 0.0042 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7074 - mean_squared_error: 0.0042 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Training complete, saving model as:  adadelta.h5
Now using the optimizer:  sgd_nesterov
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_81 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_82 (Conv2D)           (None, 94, 94, 16)        2320      
_________________________________________________________________
max_pooling2d_41 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_49 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_83 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_84 (Conv2D)           (None, 45, 45, 32)        9248      
_________________________________________________________________
max_pooling2d_42 (MaxPooling (None, 22, 22, 32)        0         
_________________________________________________________________
dropout_50 (Dropout)         (None, 22, 22, 32)        0         
_________________________________________________________________
conv2d_85 (Conv2D)           (None, 22, 22, 64)        18496     
_________________________________________________________________
conv2d_86 (Conv2D)           (None, 20, 20, 64)        36928     
_________________________________________________________________
max_pooling2d_43 (MaxPooling (None, 10, 10, 64)        0         
_________________________________________________________________
dropout_51 (Dropout)         (None, 10, 10, 64)        0         
_________________________________________________________________
conv2d_87 (Conv2D)           (None, 10, 10, 128)       73856     
_________________________________________________________________
conv2d_88 (Conv2D)           (None, 8, 8, 128)         147584    
_________________________________________________________________
max_pooling2d_44 (MaxPooling (None, 4, 4, 128)         0         
_________________________________________________________________
dropout_52 (Dropout)         (None, 4, 4, 128)         0         
_________________________________________________________________
conv2d_89 (Conv2D)           (None, 4, 4, 256)         295168    
_________________________________________________________________
conv2d_90 (Conv2D)           (None, 2, 2, 256)         590080    
_________________________________________________________________
max_pooling2d_45 (MaxPooling (None, 1, 1, 256)         0         
_________________________________________________________________
dropout_53 (Dropout)         (None, 1, 1, 256)         0         
_________________________________________________________________
flatten_9 (Flatten)          (None, 256)               0         
_________________________________________________________________
dense_17 (Dense)             (None, 512)               131584    
_________________________________________________________________
dropout_54 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_18 (Dense)             (None, 30)                15390     
=================================================================
Total params: 1,325,454
Trainable params: 1,325,454
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/100
1712/1712 [==============================] - 4s 2ms/step - loss: 0.0737 - acc: 0.2769 - mean_squared_error: 0.0737 - val_loss: 0.0336 - val_acc: 0.6963 - val_mean_squared_error: 0.0336
Epoch 2/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0198 - acc: 0.4206 - mean_squared_error: 0.0198 - val_loss: 0.0231 - val_acc: 0.6963 - val_mean_squared_error: 0.0231
Epoch 3/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0146 - acc: 0.4720 - mean_squared_error: 0.0146 - val_loss: 0.0162 - val_acc: 0.6963 - val_mean_squared_error: 0.0162
Epoch 4/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0123 - acc: 0.5000 - mean_squared_error: 0.0123 - val_loss: 0.0122 - val_acc: 0.6963 - val_mean_squared_error: 0.0122
Epoch 5/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0110 - acc: 0.5444 - mean_squared_error: 0.0110 - val_loss: 0.0103 - val_acc: 0.6963 - val_mean_squared_error: 0.0103
Epoch 6/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0099 - acc: 0.5391 - mean_squared_error: 0.0099 - val_loss: 0.0091 - val_acc: 0.6963 - val_mean_squared_error: 0.0091
Epoch 7/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0092 - acc: 0.5666 - mean_squared_error: 0.0092 - val_loss: 0.0080 - val_acc: 0.6963 - val_mean_squared_error: 0.0080
Epoch 8/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0088 - acc: 0.5824 - mean_squared_error: 0.0088 - val_loss: 0.0074 - val_acc: 0.6963 - val_mean_squared_error: 0.0074
Epoch 9/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0083 - acc: 0.5958 - mean_squared_error: 0.0083 - val_loss: 0.0069 - val_acc: 0.6963 - val_mean_squared_error: 0.0069
Epoch 10/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0080 - acc: 0.5923 - mean_squared_error: 0.0080 - val_loss: 0.0066 - val_acc: 0.6963 - val_mean_squared_error: 0.0066
Epoch 11/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0076 - acc: 0.6157 - mean_squared_error: 0.0076 - val_loss: 0.0063 - val_acc: 0.6963 - val_mean_squared_error: 0.0063
Epoch 12/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0074 - acc: 0.6215 - mean_squared_error: 0.0074 - val_loss: 0.0059 - val_acc: 0.6963 - val_mean_squared_error: 0.0059
Epoch 13/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6221 - mean_squared_error: 0.0072 - val_loss: 0.0058 - val_acc: 0.6963 - val_mean_squared_error: 0.0058
Epoch 14/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0070 - acc: 0.6262 - mean_squared_error: 0.0070 - val_loss: 0.0057 - val_acc: 0.6963 - val_mean_squared_error: 0.0057
Epoch 15/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0069 - acc: 0.6402 - mean_squared_error: 0.0069 - val_loss: 0.0054 - val_acc: 0.6963 - val_mean_squared_error: 0.0054
Epoch 16/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0067 - acc: 0.6507 - mean_squared_error: 0.0067 - val_loss: 0.0053 - val_acc: 0.6963 - val_mean_squared_error: 0.0053
Epoch 17/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.6454 - mean_squared_error: 0.0065 - val_loss: 0.0052 - val_acc: 0.6963 - val_mean_squared_error: 0.0052
Epoch 18/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0064 - acc: 0.6437 - mean_squared_error: 0.0064 - val_loss: 0.0051 - val_acc: 0.6963 - val_mean_squared_error: 0.0051
Epoch 19/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.6525 - mean_squared_error: 0.0063 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 20/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6484 - mean_squared_error: 0.0062 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 21/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6659 - mean_squared_error: 0.0062 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 22/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.6565 - mean_squared_error: 0.0061 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 23/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6717 - mean_squared_error: 0.0060 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 24/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6560 - mean_squared_error: 0.0059 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 25/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6653 - mean_squared_error: 0.0059 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 26/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6764 - mean_squared_error: 0.0058 - val_loss: 0.0048 - val_acc: 0.6963 - val_mean_squared_error: 0.0048
Epoch 27/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6735 - mean_squared_error: 0.0057 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 28/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6822 - mean_squared_error: 0.0057 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 29/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6805 - mean_squared_error: 0.0057 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 30/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6893 - mean_squared_error: 0.0055 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 31/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6852 - mean_squared_error: 0.0056 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 32/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6898 - mean_squared_error: 0.0055 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 33/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.6893 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 34/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6881 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 35/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6933 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 36/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6904 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 37/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6957 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 38/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6916 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 39/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6939 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 40/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6951 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 41/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6957 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 42/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6998 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 43/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6974 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 44/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6998 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 45/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6974 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 46/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6986 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 47/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.6980 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 48/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.6992 - mean_squared_error: 0.0050 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 49/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.6974 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 50/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7004 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 51/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7021 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 52/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7044 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 53/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7027 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 54/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7039 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 55/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0050 - acc: 0.7027 - mean_squared_error: 0.0050 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 56/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7015 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7033 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 58/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.6986 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7056 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 60/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 61/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7050 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 62/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7039 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 63/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7033 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 64/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 65/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7056 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7056 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 67/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7056 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 68/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7062 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 69/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7056 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 70/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7050 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 71/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7079 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 72/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7085 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 73/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7068 - mean_squared_error: 0.0048 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 74/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 75/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 76/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7085 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 77/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 78/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 79/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 80/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 81/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7085 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 82/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 83/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7079 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 84/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 85/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7074 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 86/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7068 - mean_squared_error: 0.0047 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 87/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 88/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 89/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 90/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 91/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 92/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 93/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 94/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 95/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 96/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 97/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 98/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 99/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 100/100
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Training complete, saving model as:  sgd_nesterov.h5

Step 7: Visualize the Loss and Test Predictions

(IMPLEMENTATION) Answer a few questions and visualize the loss

Question 1: Outline the steps you took to get to your final neural network architecture and your reasoning at each step.

CNNs are the state-of-the-art networks for recognizing images and for localizing structures within them. I referred to the provided blog post to understand the data pipeline, and based on it my approach was to build the model iteratively, improving the accuracy and reducing the loss at each step. (A Keras sketch of the resulting architecture is shown after this answer.)

1) Started with a basic model that had 3 convolutional layers interleaved with max-pooling layers.

  • Used a 3x3 kernel with 'same' padding.
  • Max-pooling layers with a 2x2 pool size.
  • Each max-pooling layer halves the width and height of the feature maps.
  • Used the ReLU non-linearity at each layer.
  • The final layer has no non-linearity, since this is a regression model (it predicts continuous values rather than discrete labels).

2) Augmented the model by adding one more convolutional layer before each max-pooling layer.

  • Each block is now two convolutional layers followed by a max-pooling layer.

3) Added regularization by introducing dropout, which reduced the overfitting.

  • After this, the validation loss decreased in step with the training loss, and the validation accuracy tracked the training accuracy.

4) Experimented with different dropout rates at different layers.

  • Progressively increased the dropout rate in the deeper layers, up to a dropout of 55% at the dense layer.

5) With these changes the network trained well, with a significant improvement in validation accuracy and a clear reduction in loss.
I also tried adding BatchNormalization at each layer. It smoothed the learning curves, but the network eventually stopped improving and the accuracy plateaued at about 70%.
Batch normalization is mainly useful for countering covariate shift. In this problem the targets are facial keypoint locations, which stay in roughly the same proportions from face to face, so there is little covariate shift to correct for; that intuition supports leaving batch normalization out.
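
For reference, below is a minimal Keras sketch of the kind of architecture this converged to. The function name get_model_sketch, the exact kernel sizes and paddings, and the per-block dropout rates are illustrative assumptions; the layer summaries printed with each training run show the precise shapes that were actually used.

# Illustrative sketch only (not the notebook's get_model() cell): five blocks of
# Conv -> Conv -> MaxPool -> Dropout with doubling filter counts, then a dense head.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

def get_model_sketch(input_shape=(96, 96, 1)):
    model = Sequential()
    dropout = 0.15                                # assumed starting dropout rate
    for i, filters in enumerate([16, 32, 64, 128, 256]):
        if i == 0:
            model.add(Conv2D(filters, (3, 3), padding='same', activation='relu',
                             input_shape=input_shape))
        else:
            model.add(Conv2D(filters, (3, 3), padding='same', activation='relu'))
        model.add(Conv2D(filters, (3, 3), activation='relu'))
        model.add(MaxPooling2D(pool_size=(2, 2)))
        model.add(Dropout(dropout))
        dropout += 0.10                           # dropout grows with depth
    model.add(Flatten())
    model.add(Dense(512, activation='relu'))
    model.add(Dropout(0.55))                      # heaviest dropout at the dense layer
    model.add(Dense(30))                          # 30 keypoint coordinates, linear output
    return model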

Question 2: Defend your choice of optimizer. Which optimizers did you test, and how did you determine which worked best?

I tried every optimizer that Keras supports, using the hyperparameter values recommended in the Keras documentation. I trained the same model with each optimizer and plotted both the loss and the accuracy; the comparison loop is sketched below.
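
The optimizer-comparison cell itself is not reproduced here; a minimal sketch of that loop, assuming a get_model() factory (the same one used for the final run below) and the training settings visible in the logs above, looks roughly like this. The dictionary keys mirror the names printed in the output (including the 'sdg' spelling), and the optimizer hyperparameters are the Keras defaults.

# Sketch of the optimizer comparison; builds the history_dict used by the
# plotting cells below. Variable names are assumptions for illustration.
from keras.optimizers import SGD, RMSprop, Adagrad, Adadelta, Adam, Adamax, Nadam

optimizers = {
    'sdg':          SGD(),
    'sgd_nesterov': SGD(momentum=0.9, nesterov=True),
    'rmsprop':      RMSprop(),
    'adagrad':      Adagrad(),
    'adadelta':     Adadelta(),
    'adam':         Adam(),
    'adam_amsgrad': Adam(amsgrad=True),
    'adamax':       Adamax(),
    'nadam':        Nadam(),
}

history_dict = {}
for name, opt in optimizers.items():
    print("Now using the optimizer: ", name)
    model = get_model()                      # fresh, identical model for each optimizer
    model.compile(loss='mse', optimizer=opt, metrics=['accuracy', 'mse'])
    history_dict[name] = model.fit(X_train, y_train, batch_size=32, epochs=100,
                                   validation_split=0.2, shuffle=True, verbose=1)
    print("Training complete, saving model as: ", name + '.h5')
    model.save(name + '.h5')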

From the plots, we can see that Adamax and Nadam were the best performers.

The best result over 100 epochs of training came from the Adamax optimizer, which reached a validation loss of 0.000942 and a validation accuracy of 0.794393 (about 79%). I therefore chose Adamax, and the next step was to train it for longer to see whether the accuracy could be improved further.

The final training yielded the following results:

  • Chosen optimizer - Adamax
  • Model - 5 x (Conv -> Conv -> MaxPool -> Dropout) -> Flatten -> Dense(512) -> Dropout -> Dense(30)
  • Epochs - 300
  • loss: 0.00088264
  • acc: 0.8049
  • mean_squared_error: 0.00088264
  • val_loss: 0.00084909
  • val_acc: 0.8154
  • val_mean_squared_error: 0.00084909

  • As per the help text given, "A very good model will achieve about 0.0015 loss."

  • I was able to achieve a validation loss of 0.00084909, well below that target (see the quick pixel-error estimate below).
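
As a rough sanity check on what an MSE this small means in pixels: assuming the keypoint coordinates are normalized to the range [-1, 1] for the 96x96 input images (so one normalized unit corresponds to 48 pixels), the final validation MSE translates to an RMS error of roughly 1.4 pixels per coordinate.

# Back-of-the-envelope pixel error (assumes keypoints normalized to [-1, 1],
# i.e. one normalized unit = 48 pixels on the 96x96 images).
import numpy as np

val_mse = 0.00084909                  # final validation MSE from the run above
rms_error_px = np.sqrt(val_mse) * 48  # ~1.4 pixels per coordinate
print("Approximate RMS keypoint error: %.1f pixels" % rms_error_px)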

Use the code cell below to plot the training and validation loss of your neural network. You may find this resource useful.

In [6]:
## TODO: Visualize the training and validation loss of your neural network
# Summarize the validation-loss history for each optimizer.
def plot_loss(optimizer_names):
    legend_list = []
    for optimizer_name in optimizer_names:
        hist = history_dict[optimizer_name]
        #plt.plot(hist.history['loss'])
        plt.plot(hist.history['val_loss'])
        legend_list.append(optimizer_name)
    plt.legend(legend_list, loc='upper right')
    plt.title('model loss')
    plt.ylabel('loss')
    plt.xlabel('epoch')

# Set the figure size before plotting so that it takes effect on this figure.
plt.rcParams["figure.figsize"] = [20, 20]
plot_loss(history_dict.keys())
plt.show()
plt.gcf().clear()
plt.clf()
plt.cla()
plt.close()

# Now print the best- and worst-performing optimizers by final validation loss.
final_best_loss = 100
final_best_optimizer_name = None
final_worst_loss = -100
final_worst_optimizer_name = None
for optimizer_name in history_dict.keys():
    loss = history_dict[optimizer_name].history['val_loss'][-1]
    print("The loss of optimizer %s is %f " % (optimizer_name, loss))
    if loss < final_best_loss:
        final_best_loss = loss
        final_best_optimizer_name = optimizer_name
    if loss > final_worst_loss:
        final_worst_loss = loss
        final_worst_optimizer_name = optimizer_name

print("The best performing optimizer is %s and the final loss is %f " % (final_best_optimizer_name, final_best_loss))
print("The worst performing optimizer is %s and the final loss is %f " % (final_worst_optimizer_name, final_worst_loss))
The loss of optimizer nadam is 0.001619 
The loss of optimizer adam is 0.004404 
The loss of optimizer adamax is 0.000942 
The loss of optimizer sdg is 0.004399 
The loss of optimizer adagrad is 0.004511 
The loss of optimizer rmsprop is 0.004387 
The loss of optimizer adam_amsgrad is 0.004374 
The loss of optimizer sgd_nesterov is 0.004402 
The loss of optimizer adadelta is 0.004127 
The best performing optimizer is adamax and the final loss is 0.000942 
The worst performing optimizer is adagrad and the final loss is 0.004511 
In [7]:
# Summarize the validation-accuracy history for each optimizer.
def plot_accuracy(optimizer_names):
    legend_list = []
    for optimizer_name in optimizer_names:
        hist = history_dict[optimizer_name]
        #plt.plot(hist.history['acc'])
        plt.plot(hist.history['val_acc'])
        legend_list.append(optimizer_name)
    plt.legend(legend_list, loc='upper left')
    plt.title('model accuracy')
    plt.ylabel('accuracy')
    plt.xlabel('epoch')

# Set the figure size before plotting so that it takes effect on this figure.
plt.rcParams["figure.figsize"] = [10, 10]
plot_accuracy(history_dict.keys())
plt.show()
plt.gcf().clear()
plt.clf()
plt.cla()
plt.close()

# Now print the best- and worst-performing optimizers by final validation accuracy.
final_best_accuracy = -100
final_best_optimizer_name = None
final_worst_accuracy = 100
final_worst_optimizer_name = None
for optimizer_name in history_dict.keys():
    accuracy = history_dict[optimizer_name].history['val_acc'][-1]
    print("The accuracy of optimizer %s is %f " % (optimizer_name, accuracy))
    if accuracy > final_best_accuracy:
        final_best_accuracy = accuracy
        final_best_optimizer_name = optimizer_name
    if accuracy < final_worst_accuracy:
        final_worst_accuracy = accuracy
        final_worst_optimizer_name = optimizer_name

print("The best performing optimizer is %s and the final accuracy is %f " % (final_best_optimizer_name, final_best_accuracy))
print("The worst performing optimizer is %s and the final accuracy is %f " % (final_worst_optimizer_name, final_worst_accuracy))
The accuracy of optimizer nadam is 0.735981 
The accuracy of optimizer adam is 0.696262 
The accuracy of optimizer adamax is 0.794393 
The accuracy of optimizer sdg is 0.696262 
The accuracy of optimizer adagrad is 0.696262 
The accuracy of optimizer rmsprop is 0.696262 
The accuracy of optimizer adam_amsgrad is 0.696262 
The accuracy of optimizer sgd_nesterov is 0.696262 
The accuracy of optimizer adadelta is 0.696262 
The best performing optimizer is adamax and the final accuracy is 0.794393 
The worst performing optimizer is adam and the final accuracy is 0.696262 
In [22]:
from keras.optimizers import Adamax

# Hyperparameters for the final training run
batch_size = 32
epochs = 300
loss = 'mse'
metrics = ['accuracy', 'mse']
val_split = 0.2
verbose = 1
shuffle = True

model = get_model()
# Define the Adamax optimizer with the Keras-recommended defaults
adamax = Adamax(lr=0.002, beta_1=0.9, beta_2=0.999)
model.compile(loss=loss, optimizer=adamax, metrics=metrics)

hist = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs,
                 validation_split=val_split, verbose=verbose, shuffle=shuffle)

## Save the model as final_model.h5
print("Training complete, saving model as: ", 'final_model.h5')
model.save('final_model.h5')
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_37 (Conv2D)           (None, 96, 96, 16)        160       
_________________________________________________________________
conv2d_38 (Conv2D)           (None, 95, 95, 16)        1040      
_________________________________________________________________
max_pooling2d_16 (MaxPooling (None, 47, 47, 16)        0         
_________________________________________________________________
dropout_19 (Dropout)         (None, 47, 47, 16)        0         
_________________________________________________________________
conv2d_39 (Conv2D)           (None, 47, 47, 32)        4640      
_________________________________________________________________
conv2d_40 (Conv2D)           (None, 46, 46, 32)        4128      
_________________________________________________________________
max_pooling2d_17 (MaxPooling (None, 23, 23, 32)        0         
_________________________________________________________________
dropout_20 (Dropout)         (None, 23, 23, 32)        0         
_________________________________________________________________
conv2d_41 (Conv2D)           (None, 23, 23, 64)        18496     
_________________________________________________________________
conv2d_42 (Conv2D)           (None, 22, 22, 64)        16448     
_________________________________________________________________
max_pooling2d_18 (MaxPooling (None, 11, 11, 64)        0         
_________________________________________________________________
dropout_21 (Dropout)         (None, 11, 11, 64)        0         
_________________________________________________________________
conv2d_43 (Conv2D)           (None, 11, 11, 128)       73856     
_________________________________________________________________
conv2d_44 (Conv2D)           (None, 10, 10, 128)       65664     
_________________________________________________________________
max_pooling2d_19 (MaxPooling (None, 5, 5, 128)         0         
_________________________________________________________________
dropout_22 (Dropout)         (None, 5, 5, 128)         0         
_________________________________________________________________
conv2d_45 (Conv2D)           (None, 5, 5, 256)         295168    
_________________________________________________________________
conv2d_46 (Conv2D)           (None, 4, 4, 256)         262400    
_________________________________________________________________
max_pooling2d_20 (MaxPooling (None, 2, 2, 256)         0         
_________________________________________________________________
dropout_23 (Dropout)         (None, 2, 2, 256)         0         
_________________________________________________________________
flatten_4 (Flatten)          (None, 1024)              0         
_________________________________________________________________
dense_7 (Dense)              (None, 512)               524800    
_________________________________________________________________
dropout_24 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_8 (Dense)              (None, 30)                15390     
=================================================================
Total params: 1,282,190
Trainable params: 1,282,190
Non-trainable params: 0
_________________________________________________________________
Train on 1712 samples, validate on 428 samples
Epoch 1/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0751 - acc: 0.3224 - mean_squared_error: 0.0751 - val_loss: 0.0187 - val_acc: 0.6963 - val_mean_squared_error: 0.0187
Epoch 2/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0153 - acc: 0.4463 - mean_squared_error: 0.0153 - val_loss: 0.0098 - val_acc: 0.6963 - val_mean_squared_error: 0.0098
Epoch 3/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0119 - acc: 0.5070 - mean_squared_error: 0.0119 - val_loss: 0.0074 - val_acc: 0.6963 - val_mean_squared_error: 0.0074
Epoch 4/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0104 - acc: 0.5181 - mean_squared_error: 0.0104 - val_loss: 0.0060 - val_acc: 0.6963 - val_mean_squared_error: 0.0060
Epoch 5/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0094 - acc: 0.5783 - mean_squared_error: 0.0094 - val_loss: 0.0057 - val_acc: 0.6963 - val_mean_squared_error: 0.0057
Epoch 6/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0088 - acc: 0.5607 - mean_squared_error: 0.0088 - val_loss: 0.0061 - val_acc: 0.6963 - val_mean_squared_error: 0.0061
Epoch 7/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0083 - acc: 0.5923 - mean_squared_error: 0.0083 - val_loss: 0.0050 - val_acc: 0.6963 - val_mean_squared_error: 0.0050
Epoch 8/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0080 - acc: 0.6127 - mean_squared_error: 0.0080 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 9/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0080 - acc: 0.6081 - mean_squared_error: 0.0080 - val_loss: 0.0049 - val_acc: 0.6963 - val_mean_squared_error: 0.0049
Epoch 10/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0075 - acc: 0.6285 - mean_squared_error: 0.0075 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 11/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0074 - acc: 0.6355 - mean_squared_error: 0.0074 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 12/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0073 - acc: 0.6519 - mean_squared_error: 0.0073 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 13/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0072 - acc: 0.6268 - mean_squared_error: 0.0072 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 14/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0070 - acc: 0.6665 - mean_squared_error: 0.0070 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 15/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0069 - acc: 0.6542 - mean_squared_error: 0.0069 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 16/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0065 - acc: 0.6688 - mean_squared_error: 0.0065 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 17/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6560 - mean_squared_error: 0.0066 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 18/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6723 - mean_squared_error: 0.0066 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 19/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0066 - acc: 0.6542 - mean_squared_error: 0.0066 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 20/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0063 - acc: 0.6764 - mean_squared_error: 0.0063 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 21/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6793 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 22/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6764 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 23/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0061 - acc: 0.6834 - mean_squared_error: 0.0061 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 24/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0062 - acc: 0.6828 - mean_squared_error: 0.0062 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 25/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6898 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 26/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0060 - acc: 0.6840 - mean_squared_error: 0.0060 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 27/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6881 - mean_squared_error: 0.0059 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 28/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0059 - acc: 0.6875 - mean_squared_error: 0.0059 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 29/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0058 - acc: 0.6916 - mean_squared_error: 0.0058 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 30/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6904 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 31/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.6992 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 32/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6916 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 33/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0057 - acc: 0.7027 - mean_squared_error: 0.0057 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 34/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6992 - mean_squared_error: 0.0056 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 35/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0056 - acc: 0.6904 - mean_squared_error: 0.0056 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 36/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0054 - acc: 0.6974 - mean_squared_error: 0.0054 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 37/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0055 - acc: 0.7044 - mean_squared_error: 0.0055 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 38/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.6951 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 39/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7044 - mean_squared_error: 0.0053 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 40/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7027 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 41/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.6980 - mean_squared_error: 0.0052 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 42/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0052 - acc: 0.7009 - mean_squared_error: 0.0052 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 43/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0053 - acc: 0.7021 - mean_squared_error: 0.0053 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 44/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7009 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 45/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7062 - mean_squared_error: 0.0051 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 46/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0051 - acc: 0.7027 - mean_squared_error: 0.0051 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 47/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7033 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 48/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7079 - mean_squared_error: 0.0049 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 49/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7027 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 50/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0049 - acc: 0.7039 - mean_squared_error: 0.0049 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 51/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7079 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 52/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0048 - acc: 0.7015 - mean_squared_error: 0.0048 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 53/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7056 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 54/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7074 - mean_squared_error: 0.0046 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 55/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0047 - acc: 0.7056 - mean_squared_error: 0.0047 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 56/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7068 - mean_squared_error: 0.0046 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 57/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0046 - acc: 0.7044 - mean_squared_error: 0.0046 - val_loss: 0.0047 - val_acc: 0.6963 - val_mean_squared_error: 0.0047
Epoch 58/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0045 - acc: 0.7097 - mean_squared_error: 0.0045 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 59/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7068 - mean_squared_error: 0.0044 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 60/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7021 - mean_squared_error: 0.0044 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 61/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0044 - acc: 0.7074 - mean_squared_error: 0.0044 - val_loss: 0.0046 - val_acc: 0.6963 - val_mean_squared_error: 0.0046
Epoch 62/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0043 - acc: 0.7050 - mean_squared_error: 0.0043 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 63/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7044 - mean_squared_error: 0.0042 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 64/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7091 - mean_squared_error: 0.0042 - val_loss: 0.0045 - val_acc: 0.6963 - val_mean_squared_error: 0.0045
Epoch 65/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0042 - acc: 0.7033 - mean_squared_error: 0.0042 - val_loss: 0.0044 - val_acc: 0.6963 - val_mean_squared_error: 0.0044
Epoch 66/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0041 - acc: 0.7044 - mean_squared_error: 0.0041 - val_loss: 0.0041 - val_acc: 0.6963 - val_mean_squared_error: 0.0041
Epoch 67/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0041 - acc: 0.7015 - mean_squared_error: 0.0041 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 68/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0040 - acc: 0.7056 - mean_squared_error: 0.0040 - val_loss: 0.0042 - val_acc: 0.6963 - val_mean_squared_error: 0.0042
Epoch 69/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0039 - acc: 0.7021 - mean_squared_error: 0.0039 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 70/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0039 - acc: 0.7044 - mean_squared_error: 0.0039 - val_loss: 0.0043 - val_acc: 0.6963 - val_mean_squared_error: 0.0043
Epoch 71/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0038 - acc: 0.7004 - mean_squared_error: 0.0038 - val_loss: 0.0039 - val_acc: 0.6963 - val_mean_squared_error: 0.0039
Epoch 72/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0037 - acc: 0.7015 - mean_squared_error: 0.0037 - val_loss: 0.0038 - val_acc: 0.6986 - val_mean_squared_error: 0.0038
Epoch 73/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0037 - acc: 0.7021 - mean_squared_error: 0.0037 - val_loss: 0.0037 - val_acc: 0.6963 - val_mean_squared_error: 0.0037
Epoch 74/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0036 - acc: 0.7039 - mean_squared_error: 0.0036 - val_loss: 0.0035 - val_acc: 0.6963 - val_mean_squared_error: 0.0035
Epoch 75/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0035 - acc: 0.7044 - mean_squared_error: 0.0035 - val_loss: 0.0034 - val_acc: 0.6963 - val_mean_squared_error: 0.0034
Epoch 76/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0034 - acc: 0.7103 - mean_squared_error: 0.0034 - val_loss: 0.0033 - val_acc: 0.6963 - val_mean_squared_error: 0.0033
Epoch 77/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0033 - acc: 0.7097 - mean_squared_error: 0.0033 - val_loss: 0.0032 - val_acc: 0.6963 - val_mean_squared_error: 0.0032
Epoch 78/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0032 - acc: 0.7079 - mean_squared_error: 0.0032 - val_loss: 0.0030 - val_acc: 0.6963 - val_mean_squared_error: 0.0030
Epoch 79/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0031 - acc: 0.7062 - mean_squared_error: 0.0031 - val_loss: 0.0027 - val_acc: 0.6963 - val_mean_squared_error: 0.0027
Epoch 80/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0030 - acc: 0.7097 - mean_squared_error: 0.0030 - val_loss: 0.0027 - val_acc: 0.7033 - val_mean_squared_error: 0.0027
Epoch 81/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0029 - acc: 0.7109 - mean_squared_error: 0.0029 - val_loss: 0.0028 - val_acc: 0.6963 - val_mean_squared_error: 0.0028
Epoch 82/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0029 - acc: 0.7114 - mean_squared_error: 0.0029 - val_loss: 0.0024 - val_acc: 0.7009 - val_mean_squared_error: 0.0024
Epoch 83/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0028 - acc: 0.7074 - mean_squared_error: 0.0028 - val_loss: 0.0024 - val_acc: 0.7009 - val_mean_squared_error: 0.0024
Epoch 84/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0027 - acc: 0.7074 - mean_squared_error: 0.0027 - val_loss: 0.0022 - val_acc: 0.7079 - val_mean_squared_error: 0.0022
Epoch 85/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0026 - acc: 0.7185 - mean_squared_error: 0.0026 - val_loss: 0.0021 - val_acc: 0.7056 - val_mean_squared_error: 0.0021
Epoch 86/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7056 - mean_squared_error: 0.0025 - val_loss: 0.0020 - val_acc: 0.7173 - val_mean_squared_error: 0.0020
Epoch 87/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0025 - acc: 0.7161 - mean_squared_error: 0.0025 - val_loss: 0.0021 - val_acc: 0.7220 - val_mean_squared_error: 0.0021
Epoch 88/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7161 - mean_squared_error: 0.0024 - val_loss: 0.0022 - val_acc: 0.6963 - val_mean_squared_error: 0.0022
Epoch 89/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7190 - mean_squared_error: 0.0024 - val_loss: 0.0020 - val_acc: 0.7126 - val_mean_squared_error: 0.0020
Epoch 90/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0024 - acc: 0.7266 - mean_squared_error: 0.0024 - val_loss: 0.0020 - val_acc: 0.7126 - val_mean_squared_error: 0.0020
Epoch 91/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0023 - acc: 0.7255 - mean_squared_error: 0.0023 - val_loss: 0.0019 - val_acc: 0.7220 - val_mean_squared_error: 0.0019
Epoch 92/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7179 - mean_squared_error: 0.0022 - val_loss: 0.0018 - val_acc: 0.7360 - val_mean_squared_error: 0.0018
Epoch 93/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7389 - mean_squared_error: 0.0022 - val_loss: 0.0017 - val_acc: 0.7150 - val_mean_squared_error: 0.0017
Epoch 94/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0022 - acc: 0.7336 - mean_squared_error: 0.0022 - val_loss: 0.0017 - val_acc: 0.7220 - val_mean_squared_error: 0.0017
Epoch 95/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7272 - mean_squared_error: 0.0021 - val_loss: 0.0018 - val_acc: 0.7500 - val_mean_squared_error: 0.0018
Epoch 96/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0021 - acc: 0.7331 - mean_squared_error: 0.0021 - val_loss: 0.0016 - val_acc: 0.7360 - val_mean_squared_error: 0.0016
Epoch 97/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7342 - mean_squared_error: 0.0020 - val_loss: 0.0015 - val_acc: 0.7453 - val_mean_squared_error: 0.0015
Epoch 98/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7360 - mean_squared_error: 0.0020 - val_loss: 0.0016 - val_acc: 0.7313 - val_mean_squared_error: 0.0016
Epoch 99/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0020 - acc: 0.7307 - mean_squared_error: 0.0020 - val_loss: 0.0015 - val_acc: 0.7570 - val_mean_squared_error: 0.0015
Epoch 100/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7342 - mean_squared_error: 0.0019 - val_loss: 0.0016 - val_acc: 0.7360 - val_mean_squared_error: 0.0016
Epoch 101/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7412 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7664 - val_mean_squared_error: 0.0015
Epoch 102/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7407 - mean_squared_error: 0.0019 - val_loss: 0.0015 - val_acc: 0.7523 - val_mean_squared_error: 0.0015
Epoch 103/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7284 - mean_squared_error: 0.0018 - val_loss: 0.0014 - val_acc: 0.7523 - val_mean_squared_error: 0.0014
Epoch 104/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7331 - mean_squared_error: 0.0019 - val_loss: 0.0014 - val_acc: 0.7523 - val_mean_squared_error: 0.0014
Epoch 105/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0019 - acc: 0.7366 - mean_squared_error: 0.0019 - val_loss: 0.0014 - val_acc: 0.7407 - val_mean_squared_error: 0.0014
Epoch 106/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7348 - mean_squared_error: 0.0018 - val_loss: 0.0015 - val_acc: 0.7500 - val_mean_squared_error: 0.0015
Epoch 107/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0018 - acc: 0.7418 - mean_squared_error: 0.0018 - val_loss: 0.0014 - val_acc: 0.7640 - val_mean_squared_error: 0.0014
Epoch 108/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7541 - mean_squared_error: 0.0017 - val_loss: 0.0015 - val_acc: 0.7477 - val_mean_squared_error: 0.0015
Epoch 109/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7354 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7640 - val_mean_squared_error: 0.0013
Epoch 110/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7377 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7500 - val_mean_squared_error: 0.0013
Epoch 111/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7535 - mean_squared_error: 0.0017 - val_loss: 0.0014 - val_acc: 0.7547 - val_mean_squared_error: 0.0014
Epoch 112/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7430 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7617 - val_mean_squared_error: 0.0013
Epoch 113/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7675 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7570 - val_mean_squared_error: 0.0013
Epoch 114/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7436 - mean_squared_error: 0.0016 - val_loss: 0.0013 - val_acc: 0.7407 - val_mean_squared_error: 0.0013
Epoch 115/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0017 - acc: 0.7477 - mean_squared_error: 0.0017 - val_loss: 0.0013 - val_acc: 0.7547 - val_mean_squared_error: 0.0013
Epoch 116/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7471 - mean_squared_error: 0.0016 - val_loss: 0.0013 - val_acc: 0.7570 - val_mean_squared_error: 0.0013
Epoch 117/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7582 - mean_squared_error: 0.0016 - val_loss: 0.0012 - val_acc: 0.7640 - val_mean_squared_error: 0.0012
Epoch 118/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7488 - mean_squared_error: 0.0016 - val_loss: 0.0012 - val_acc: 0.7710 - val_mean_squared_error: 0.0012
Epoch 119/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0016 - acc: 0.7506 - mean_squared_error: 0.0016 - val_loss: 0.0012 - val_acc: 0.7570 - val_mean_squared_error: 0.0012
Epoch 120/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7553 - mean_squared_error: 0.0015 - val_loss: 0.0013 - val_acc: 0.7407 - val_mean_squared_error: 0.0013
Epoch 121/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7541 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7570 - val_mean_squared_error: 0.0012
Epoch 122/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7588 - mean_squared_error: 0.0015 - val_loss: 0.0011 - val_acc: 0.7757 - val_mean_squared_error: 0.0011
Epoch 123/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7634 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7757 - val_mean_squared_error: 0.0012
Epoch 124/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7564 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7664 - val_mean_squared_error: 0.0012
Epoch 125/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7576 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7687 - val_mean_squared_error: 0.0012
Epoch 126/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7669 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7664 - val_mean_squared_error: 0.0012
Epoch 127/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7570 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7640 - val_mean_squared_error: 0.0012
Epoch 128/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0015 - acc: 0.7477 - mean_squared_error: 0.0015 - val_loss: 0.0012 - val_acc: 0.7664 - val_mean_squared_error: 0.0012
Epoch 129/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7757 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7664 - val_mean_squared_error: 0.0011
Epoch 130/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7611 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7617 - val_mean_squared_error: 0.0011
Epoch 131/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7576 - mean_squared_error: 0.0014 - val_loss: 0.0012 - val_acc: 0.7593 - val_mean_squared_error: 0.0012
Epoch 132/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7593 - mean_squared_error: 0.0014 - val_loss: 0.0012 - val_acc: 0.7640 - val_mean_squared_error: 0.0012
Epoch 133/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7570 - mean_squared_error: 0.0014 - val_loss: 0.0012 - val_acc: 0.7570 - val_mean_squared_error: 0.0012
Epoch 134/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7599 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7593 - val_mean_squared_error: 0.0011
Epoch 135/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7634 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7710 - val_mean_squared_error: 0.0011
Epoch 136/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7792 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7430 - val_mean_squared_error: 0.0011
Epoch 137/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7617 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7757 - val_mean_squared_error: 0.0011
Epoch 138/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7576 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7710 - val_mean_squared_error: 0.0011
Epoch 139/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0014 - acc: 0.7687 - mean_squared_error: 0.0014 - val_loss: 0.0011 - val_acc: 0.7687 - val_mean_squared_error: 0.0011
Epoch 140/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7775 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7827 - val_mean_squared_error: 0.0011
Epoch 141/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7658 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7617 - val_mean_squared_error: 0.0011
Epoch 142/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7634 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7664 - val_mean_squared_error: 0.0011
Epoch 143/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7593 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7804 - val_mean_squared_error: 0.0010
Epoch 144/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7780 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7570 - val_mean_squared_error: 0.0011
Epoch 145/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7845 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7640 - val_mean_squared_error: 0.0011
Epoch 146/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7728 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7757 - val_mean_squared_error: 0.0010
Epoch 147/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7728 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7687 - val_mean_squared_error: 0.0010
Epoch 148/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7699 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7617 - val_mean_squared_error: 0.0011
Epoch 149/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7780 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7757 - val_mean_squared_error: 0.0011
Epoch 150/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7658 - mean_squared_error: 0.0013 - val_loss: 0.0011 - val_acc: 0.7570 - val_mean_squared_error: 0.0011
Epoch 151/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0013 - acc: 0.7699 - mean_squared_error: 0.0013 - val_loss: 0.0010 - val_acc: 0.7780 - val_mean_squared_error: 0.0010
Epoch 152/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7716 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7850 - val_mean_squared_error: 0.0010
Epoch 153/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7815 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7850 - val_mean_squared_error: 0.0010
Epoch 154/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7798 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.7710 - val_mean_squared_error: 0.0011
Epoch 155/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7769 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7780 - val_mean_squared_error: 0.0010
Epoch 156/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7734 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7734 - val_mean_squared_error: 0.0010
Epoch 157/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7763 - mean_squared_error: 0.0012 - val_loss: 9.9524e-04 - val_acc: 0.7687 - val_mean_squared_error: 9.9524e-04
Epoch 158/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7862 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.7804 - val_mean_squared_error: 0.0011
Epoch 159/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7757 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7897 - val_mean_squared_error: 0.0010
Epoch 160/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7815 - mean_squared_error: 0.0012 - val_loss: 0.0011 - val_acc: 0.7687 - val_mean_squared_error: 0.0011
Epoch 161/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7850 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7850 - val_mean_squared_error: 0.0010
Epoch 162/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7915 - mean_squared_error: 0.0012 - val_loss: 9.6539e-04 - val_acc: 0.7780 - val_mean_squared_error: 9.6539e-04
Epoch 163/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7623 - mean_squared_error: 0.0012 - val_loss: 9.8157e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.8157e-04
Epoch 164/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7903 - mean_squared_error: 0.0012 - val_loss: 0.0010 - val_acc: 0.7804 - val_mean_squared_error: 0.0010
Epoch 165/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7821 - mean_squared_error: 0.0012 - val_loss: 9.9507e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.9507e-04
Epoch 166/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7745 - mean_squared_error: 0.0012 - val_loss: 9.4730e-04 - val_acc: 0.7897 - val_mean_squared_error: 9.4730e-04
Epoch 167/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7833 - mean_squared_error: 0.0012 - val_loss: 9.7037e-04 - val_acc: 0.7734 - val_mean_squared_error: 9.7037e-04
Epoch 168/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7833 - mean_squared_error: 0.0012 - val_loss: 9.8513e-04 - val_acc: 0.7780 - val_mean_squared_error: 9.8513e-04
Epoch 169/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7804 - mean_squared_error: 0.0012 - val_loss: 9.3570e-04 - val_acc: 0.7780 - val_mean_squared_error: 9.3570e-04
Epoch 170/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7839 - mean_squared_error: 0.0012 - val_loss: 9.4019e-04 - val_acc: 0.7734 - val_mean_squared_error: 9.4019e-04
Epoch 171/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7728 - mean_squared_error: 0.0011 - val_loss: 9.9917e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.9917e-04
Epoch 172/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7769 - mean_squared_error: 0.0011 - val_loss: 9.8199e-04 - val_acc: 0.7897 - val_mean_squared_error: 9.8199e-04
Epoch 173/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7769 - mean_squared_error: 0.0012 - val_loss: 9.6617e-04 - val_acc: 0.7710 - val_mean_squared_error: 9.6617e-04
Epoch 174/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7886 - mean_squared_error: 0.0011 - val_loss: 0.0010 - val_acc: 0.7921 - val_mean_squared_error: 0.0010
Epoch 175/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7827 - mean_squared_error: 0.0012 - val_loss: 9.6462e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.6462e-04
Epoch 176/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7792 - mean_squared_error: 0.0012 - val_loss: 9.7868e-04 - val_acc: 0.7850 - val_mean_squared_error: 9.7868e-04
Epoch 177/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0012 - acc: 0.7745 - mean_squared_error: 0.0012 - val_loss: 9.4841e-04 - val_acc: 0.7804 - val_mean_squared_error: 9.4841e-04
Epoch 178/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7821 - mean_squared_error: 0.0011 - val_loss: 9.2427e-04 - val_acc: 0.7850 - val_mean_squared_error: 9.2427e-04
Epoch 179/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7751 - mean_squared_error: 0.0011 - val_loss: 9.6673e-04 - val_acc: 0.7804 - val_mean_squared_error: 9.6673e-04
Epoch 180/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7780 - mean_squared_error: 0.0011 - val_loss: 9.4551e-04 - val_acc: 0.7874 - val_mean_squared_error: 9.4551e-04
Epoch 181/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7938 - mean_squared_error: 0.0011 - val_loss: 9.3188e-04 - val_acc: 0.7991 - val_mean_squared_error: 9.3188e-04
Epoch 182/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7985 - mean_squared_error: 0.0011 - val_loss: 9.2396e-04 - val_acc: 0.7734 - val_mean_squared_error: 9.2396e-04
Epoch 183/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7845 - mean_squared_error: 0.0011 - val_loss: 9.3550e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.3550e-04
Epoch 184/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7856 - mean_squared_error: 0.0011 - val_loss: 9.4203e-04 - val_acc: 0.7780 - val_mean_squared_error: 9.4203e-04
Epoch 185/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7880 - mean_squared_error: 0.0011 - val_loss: 9.2769e-04 - val_acc: 0.7804 - val_mean_squared_error: 9.2769e-04
Epoch 186/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7926 - mean_squared_error: 0.0011 - val_loss: 9.4398e-04 - val_acc: 0.7710 - val_mean_squared_error: 9.4398e-04
Epoch 187/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7850 - mean_squared_error: 0.0011 - val_loss: 9.2632e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.2632e-04
Epoch 188/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7780 - mean_squared_error: 0.0011 - val_loss: 9.5584e-04 - val_acc: 0.7827 - val_mean_squared_error: 9.5584e-04
Epoch 189/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7850 - mean_squared_error: 0.0011 - val_loss: 9.3204e-04 - val_acc: 0.7827 - val_mean_squared_error: 9.3204e-04
Epoch 190/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7792 - mean_squared_error: 0.0011 - val_loss: 8.9872e-04 - val_acc: 0.7897 - val_mean_squared_error: 8.9872e-04
Epoch 191/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7850 - mean_squared_error: 0.0011 - val_loss: 9.4717e-04 - val_acc: 0.7734 - val_mean_squared_error: 9.4717e-04
Epoch 192/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7897 - mean_squared_error: 0.0011 - val_loss: 8.7319e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.7319e-04
Epoch 193/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7827 - mean_squared_error: 0.0011 - val_loss: 9.0548e-04 - val_acc: 0.8014 - val_mean_squared_error: 9.0548e-04
Epoch 194/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7961 - mean_squared_error: 0.0011 - val_loss: 9.6725e-04 - val_acc: 0.7991 - val_mean_squared_error: 9.6725e-04
Epoch 195/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7973 - mean_squared_error: 0.0011 - val_loss: 8.8588e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.8588e-04
Epoch 196/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7915 - mean_squared_error: 0.0011 - val_loss: 9.3157e-04 - val_acc: 0.8084 - val_mean_squared_error: 9.3157e-04
Epoch 197/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7810 - mean_squared_error: 0.0011 - val_loss: 9.2000e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.2000e-04
Epoch 198/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7845 - mean_squared_error: 0.0011 - val_loss: 9.1435e-04 - val_acc: 0.7734 - val_mean_squared_error: 9.1435e-04
Epoch 199/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7903 - mean_squared_error: 0.0010 - val_loss: 9.2443e-04 - val_acc: 0.8014 - val_mean_squared_error: 9.2443e-04
Epoch 200/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7751 - mean_squared_error: 0.0011 - val_loss: 8.9345e-04 - val_acc: 0.7804 - val_mean_squared_error: 8.9345e-04
Epoch 201/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7739 - mean_squared_error: 0.0011 - val_loss: 9.0621e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.0621e-04
Epoch 202/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7874 - mean_squared_error: 0.0011 - val_loss: 9.0024e-04 - val_acc: 0.8014 - val_mean_squared_error: 9.0024e-04
Epoch 203/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7979 - mean_squared_error: 0.0010 - val_loss: 8.9540e-04 - val_acc: 0.7874 - val_mean_squared_error: 8.9540e-04
Epoch 204/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0011 - acc: 0.7926 - mean_squared_error: 0.0011 - val_loss: 8.9518e-04 - val_acc: 0.7804 - val_mean_squared_error: 8.9518e-04
Epoch 205/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7973 - mean_squared_error: 0.0010 - val_loss: 9.0123e-04 - val_acc: 0.7874 - val_mean_squared_error: 9.0123e-04
Epoch 206/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7950 - mean_squared_error: 0.0010 - val_loss: 9.2543e-04 - val_acc: 0.7874 - val_mean_squared_error: 9.2543e-04
Epoch 207/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7979 - mean_squared_error: 0.0010 - val_loss: 9.0722e-04 - val_acc: 0.7850 - val_mean_squared_error: 9.0722e-04
Epoch 208/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7944 - mean_squared_error: 0.0010 - val_loss: 8.9496e-04 - val_acc: 0.7850 - val_mean_squared_error: 8.9496e-04
Epoch 209/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7827 - mean_squared_error: 0.0010 - val_loss: 9.0737e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.0737e-04
Epoch 210/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7921 - mean_squared_error: 0.0010 - val_loss: 9.0902e-04 - val_acc: 0.8037 - val_mean_squared_error: 9.0902e-04
Epoch 211/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.8049 - mean_squared_error: 0.0010 - val_loss: 8.8909e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.8909e-04
Epoch 212/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7874 - mean_squared_error: 0.0010 - val_loss: 8.8758e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.8758e-04
Epoch 213/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.8020 - mean_squared_error: 0.0010 - val_loss: 9.0046e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.0046e-04
Epoch 214/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7810 - mean_squared_error: 0.0010 - val_loss: 8.7457e-04 - val_acc: 0.7804 - val_mean_squared_error: 8.7457e-04
Epoch 215/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7967 - mean_squared_error: 0.0010 - val_loss: 9.1979e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.1979e-04
Epoch 216/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7938 - mean_squared_error: 0.0010 - val_loss: 8.7406e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.7406e-04
Epoch 217/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7886 - mean_squared_error: 0.0010 - val_loss: 8.6953e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.6953e-04
Epoch 218/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7862 - mean_squared_error: 0.0010 - val_loss: 8.7015e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.7015e-04
Epoch 219/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.8014 - mean_squared_error: 0.0010 - val_loss: 8.5879e-04 - val_acc: 0.7874 - val_mean_squared_error: 8.5879e-04
Epoch 220/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7815 - mean_squared_error: 0.0010 - val_loss: 9.1370e-04 - val_acc: 0.8224 - val_mean_squared_error: 9.1370e-04
Epoch 221/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7810 - mean_squared_error: 0.0010 - val_loss: 9.0703e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.0703e-04
Epoch 222/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7850 - mean_squared_error: 0.0010 - val_loss: 8.9982e-04 - val_acc: 0.7757 - val_mean_squared_error: 8.9982e-04
Epoch 223/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9044e-04 - acc: 0.8008 - mean_squared_error: 9.9044e-04 - val_loss: 8.7381e-04 - val_acc: 0.7897 - val_mean_squared_error: 8.7381e-04
Epoch 224/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7874 - mean_squared_error: 0.0010 - val_loss: 8.4209e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.4209e-04
Epoch 225/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7909 - mean_squared_error: 0.0010 - val_loss: 8.6623e-04 - val_acc: 0.7827 - val_mean_squared_error: 8.6623e-04
Epoch 226/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7961 - mean_squared_error: 0.0010 - val_loss: 9.1444e-04 - val_acc: 0.7850 - val_mean_squared_error: 9.1444e-04
Epoch 227/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9343e-04 - acc: 0.7932 - mean_squared_error: 9.9343e-04 - val_loss: 9.0068e-04 - val_acc: 0.7757 - val_mean_squared_error: 9.0068e-04
Epoch 228/300
1712/1712 [==============================] - 3s 2ms/step - loss: 0.0010 - acc: 0.7967 - mean_squared_error: 0.0010 - val_loss: 8.6605e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.6605e-04
Epoch 229/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9796e-04 - acc: 0.7985 - mean_squared_error: 9.9796e-04 - val_loss: 8.8053e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.8053e-04
Epoch 230/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9328e-04 - acc: 0.7985 - mean_squared_error: 9.9328e-04 - val_loss: 8.6878e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.6878e-04
Epoch 231/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.7767e-04 - acc: 0.8084 - mean_squared_error: 9.7767e-04 - val_loss: 9.3217e-04 - val_acc: 0.8084 - val_mean_squared_error: 9.3217e-04
Epoch 232/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.8254e-04 - acc: 0.8049 - mean_squared_error: 9.8254e-04 - val_loss: 8.9151e-04 - val_acc: 0.7827 - val_mean_squared_error: 8.9151e-04
Epoch 233/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6926e-04 - acc: 0.7856 - mean_squared_error: 9.6926e-04 - val_loss: 9.2712e-04 - val_acc: 0.8154 - val_mean_squared_error: 9.2712e-04
Epoch 234/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.7744e-04 - acc: 0.8061 - mean_squared_error: 9.7744e-04 - val_loss: 9.2538e-04 - val_acc: 0.7921 - val_mean_squared_error: 9.2538e-04
Epoch 235/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.8996e-04 - acc: 0.8061 - mean_squared_error: 9.8996e-04 - val_loss: 8.8185e-04 - val_acc: 0.7850 - val_mean_squared_error: 8.8185e-04
Epoch 236/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6266e-04 - acc: 0.8026 - mean_squared_error: 9.6266e-04 - val_loss: 8.9725e-04 - val_acc: 0.7921 - val_mean_squared_error: 8.9725e-04
Epoch 237/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9549e-04 - acc: 0.8102 - mean_squared_error: 9.9549e-04 - val_loss: 9.6189e-04 - val_acc: 0.7944 - val_mean_squared_error: 9.6189e-04
Epoch 238/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.9118e-04 - acc: 0.7944 - mean_squared_error: 9.9118e-04 - val_loss: 9.0193e-04 - val_acc: 0.7967 - val_mean_squared_error: 9.0193e-04
Epoch 239/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.7366e-04 - acc: 0.7985 - mean_squared_error: 9.7366e-04 - val_loss: 8.5753e-04 - val_acc: 0.8014 - val_mean_squared_error: 8.5753e-04
Epoch 240/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6851e-04 - acc: 0.8037 - mean_squared_error: 9.6851e-04 - val_loss: 8.4929e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.4929e-04
Epoch 241/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5993e-04 - acc: 0.7909 - mean_squared_error: 9.5993e-04 - val_loss: 8.4786e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.4786e-04
Epoch 242/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6617e-04 - acc: 0.7891 - mean_squared_error: 9.6617e-04 - val_loss: 8.5098e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.5098e-04
Epoch 243/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6265e-04 - acc: 0.8020 - mean_squared_error: 9.6265e-04 - val_loss: 8.5241e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.5241e-04
Epoch 244/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.8269e-04 - acc: 0.7856 - mean_squared_error: 9.8269e-04 - val_loss: 8.9657e-04 - val_acc: 0.8061 - val_mean_squared_error: 8.9657e-04
Epoch 245/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6880e-04 - acc: 0.7938 - mean_squared_error: 9.6880e-04 - val_loss: 8.8626e-04 - val_acc: 0.7897 - val_mean_squared_error: 8.8626e-04
Epoch 246/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.8391e-04 - acc: 0.8037 - mean_squared_error: 9.8391e-04 - val_loss: 8.1384e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.1384e-04
Epoch 247/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.4529e-04 - acc: 0.7950 - mean_squared_error: 9.4529e-04 - val_loss: 8.8721e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.8721e-04
Epoch 248/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5251e-04 - acc: 0.8020 - mean_squared_error: 9.5251e-04 - val_loss: 8.9239e-04 - val_acc: 0.7921 - val_mean_squared_error: 8.9239e-04
Epoch 249/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5956e-04 - acc: 0.8084 - mean_squared_error: 9.5956e-04 - val_loss: 8.5207e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.5207e-04
Epoch 250/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6908e-04 - acc: 0.8014 - mean_squared_error: 9.6908e-04 - val_loss: 8.5966e-04 - val_acc: 0.7921 - val_mean_squared_error: 8.5966e-04
Epoch 251/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6958e-04 - acc: 0.7979 - mean_squared_error: 9.6958e-04 - val_loss: 8.4851e-04 - val_acc: 0.8061 - val_mean_squared_error: 8.4851e-04
Epoch 252/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2977e-04 - acc: 0.8020 - mean_squared_error: 9.2977e-04 - val_loss: 9.0359e-04 - val_acc: 0.8107 - val_mean_squared_error: 9.0359e-04
Epoch 253/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6252e-04 - acc: 0.7967 - mean_squared_error: 9.6252e-04 - val_loss: 8.9199e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.9199e-04
Epoch 254/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5965e-04 - acc: 0.8014 - mean_squared_error: 9.5965e-04 - val_loss: 8.6231e-04 - val_acc: 0.7921 - val_mean_squared_error: 8.6231e-04
Epoch 255/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.6769e-04 - acc: 0.8032 - mean_squared_error: 9.6769e-04 - val_loss: 8.8562e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.8562e-04
Epoch 256/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.4505e-04 - acc: 0.8037 - mean_squared_error: 9.4505e-04 - val_loss: 8.4303e-04 - val_acc: 0.7897 - val_mean_squared_error: 8.4303e-04
Epoch 257/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3822e-04 - acc: 0.8090 - mean_squared_error: 9.3822e-04 - val_loss: 8.5839e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.5839e-04
Epoch 258/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2909e-04 - acc: 0.7996 - mean_squared_error: 9.2909e-04 - val_loss: 8.9102e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.9102e-04
Epoch 259/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5145e-04 - acc: 0.8014 - mean_squared_error: 9.5145e-04 - val_loss: 8.8895e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.8895e-04
Epoch 260/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5382e-04 - acc: 0.8107 - mean_squared_error: 9.5382e-04 - val_loss: 8.8710e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.8710e-04
Epoch 261/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.4053e-04 - acc: 0.7950 - mean_squared_error: 9.4053e-04 - val_loss: 8.2361e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.2361e-04
Epoch 262/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3472e-04 - acc: 0.8183 - mean_squared_error: 9.3472e-04 - val_loss: 8.9176e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.9176e-04
Epoch 263/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1840e-04 - acc: 0.8166 - mean_squared_error: 9.1840e-04 - val_loss: 8.5631e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.5631e-04
Epoch 264/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.5056e-04 - acc: 0.8008 - mean_squared_error: 9.5056e-04 - val_loss: 8.6504e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.6504e-04
Epoch 265/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3933e-04 - acc: 0.7996 - mean_squared_error: 9.3933e-04 - val_loss: 8.7542e-04 - val_acc: 0.8201 - val_mean_squared_error: 8.7542e-04
Epoch 266/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.4000e-04 - acc: 0.8002 - mean_squared_error: 9.4000e-04 - val_loss: 9.0623e-04 - val_acc: 0.8084 - val_mean_squared_error: 9.0623e-04
Epoch 267/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.4502e-04 - acc: 0.8055 - mean_squared_error: 9.4502e-04 - val_loss: 8.3337e-04 - val_acc: 0.8061 - val_mean_squared_error: 8.3337e-04
Epoch 268/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2557e-04 - acc: 0.7903 - mean_squared_error: 9.2557e-04 - val_loss: 8.5329e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.5329e-04
Epoch 269/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1944e-04 - acc: 0.7950 - mean_squared_error: 9.1944e-04 - val_loss: 8.0677e-04 - val_acc: 0.8014 - val_mean_squared_error: 8.0677e-04
Epoch 270/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3270e-04 - acc: 0.8067 - mean_squared_error: 9.3270e-04 - val_loss: 8.4526e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.4526e-04
Epoch 271/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2456e-04 - acc: 0.8166 - mean_squared_error: 9.2456e-04 - val_loss: 8.3949e-04 - val_acc: 0.8224 - val_mean_squared_error: 8.3949e-04
Epoch 272/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3053e-04 - acc: 0.8084 - mean_squared_error: 9.3053e-04 - val_loss: 8.8636e-04 - val_acc: 0.8131 - val_mean_squared_error: 8.8636e-04
Epoch 273/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3856e-04 - acc: 0.8026 - mean_squared_error: 9.3856e-04 - val_loss: 8.2553e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.2553e-04
Epoch 274/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2782e-04 - acc: 0.8096 - mean_squared_error: 9.2782e-04 - val_loss: 8.9475e-04 - val_acc: 0.8061 - val_mean_squared_error: 8.9475e-04
Epoch 275/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2407e-04 - acc: 0.7903 - mean_squared_error: 9.2407e-04 - val_loss: 8.6194e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.6194e-04
Epoch 276/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1659e-04 - acc: 0.7973 - mean_squared_error: 9.1659e-04 - val_loss: 8.6549e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.6549e-04
Epoch 277/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0638e-04 - acc: 0.8014 - mean_squared_error: 9.0638e-04 - val_loss: 8.3907e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.3907e-04
Epoch 278/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0823e-04 - acc: 0.7909 - mean_squared_error: 9.0823e-04 - val_loss: 8.7519e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.7519e-04
Epoch 279/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9989e-04 - acc: 0.8143 - mean_squared_error: 8.9989e-04 - val_loss: 8.5886e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.5886e-04
Epoch 280/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3558e-04 - acc: 0.8026 - mean_squared_error: 9.3558e-04 - val_loss: 8.4450e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.4450e-04
Epoch 281/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1335e-04 - acc: 0.8148 - mean_squared_error: 9.1335e-04 - val_loss: 8.6935e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.6935e-04
Epoch 282/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0308e-04 - acc: 0.8224 - mean_squared_error: 9.0308e-04 - val_loss: 8.4928e-04 - val_acc: 0.8224 - val_mean_squared_error: 8.4928e-04
Epoch 283/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.3769e-04 - acc: 0.7979 - mean_squared_error: 9.3769e-04 - val_loss: 8.5190e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.5190e-04
Epoch 284/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0989e-04 - acc: 0.8055 - mean_squared_error: 9.0989e-04 - val_loss: 8.5961e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.5961e-04
Epoch 285/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0106e-04 - acc: 0.8119 - mean_squared_error: 9.0106e-04 - val_loss: 8.5937e-04 - val_acc: 0.7921 - val_mean_squared_error: 8.5937e-04
Epoch 286/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.2536e-04 - acc: 0.8102 - mean_squared_error: 9.2536e-04 - val_loss: 8.7551e-04 - val_acc: 0.8061 - val_mean_squared_error: 8.7551e-04
Epoch 287/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9550e-04 - acc: 0.8201 - mean_squared_error: 8.9550e-04 - val_loss: 8.3711e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.3711e-04
Epoch 288/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9559e-04 - acc: 0.8178 - mean_squared_error: 8.9559e-04 - val_loss: 8.6183e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.6183e-04
Epoch 289/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.8064e-04 - acc: 0.8043 - mean_squared_error: 8.8064e-04 - val_loss: 8.4678e-04 - val_acc: 0.7944 - val_mean_squared_error: 8.4678e-04
Epoch 290/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0586e-04 - acc: 0.8125 - mean_squared_error: 9.0586e-04 - val_loss: 8.1618e-04 - val_acc: 0.8107 - val_mean_squared_error: 8.1618e-04
Epoch 291/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1555e-04 - acc: 0.7874 - mean_squared_error: 9.1555e-04 - val_loss: 8.2372e-04 - val_acc: 0.8201 - val_mean_squared_error: 8.2372e-04
Epoch 292/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.1696e-04 - acc: 0.8265 - mean_squared_error: 9.1696e-04 - val_loss: 8.5186e-04 - val_acc: 0.8154 - val_mean_squared_error: 8.5186e-04
Epoch 293/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9444e-04 - acc: 0.8166 - mean_squared_error: 8.9444e-04 - val_loss: 8.5064e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.5064e-04
Epoch 294/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9097e-04 - acc: 0.8102 - mean_squared_error: 8.9097e-04 - val_loss: 8.4676e-04 - val_acc: 0.8154 - val_mean_squared_error: 8.4676e-04
Epoch 295/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.8597e-04 - acc: 0.7961 - mean_squared_error: 8.8597e-04 - val_loss: 8.5190e-04 - val_acc: 0.8037 - val_mean_squared_error: 8.5190e-04
Epoch 296/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0364e-04 - acc: 0.8055 - mean_squared_error: 9.0364e-04 - val_loss: 8.3475e-04 - val_acc: 0.7967 - val_mean_squared_error: 8.3475e-04
Epoch 297/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9226e-04 - acc: 0.7967 - mean_squared_error: 8.9226e-04 - val_loss: 8.6033e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.6033e-04
Epoch 298/300
1712/1712 [==============================] - 3s 2ms/step - loss: 9.0374e-04 - acc: 0.8008 - mean_squared_error: 9.0374e-04 - val_loss: 8.3902e-04 - val_acc: 0.8084 - val_mean_squared_error: 8.3902e-04
Epoch 299/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.9450e-04 - acc: 0.8020 - mean_squared_error: 8.9450e-04 - val_loss: 8.3901e-04 - val_acc: 0.7991 - val_mean_squared_error: 8.3901e-04
Epoch 300/300
1712/1712 [==============================] - 3s 2ms/step - loss: 8.8264e-04 - acc: 0.8049 - mean_squared_error: 8.8264e-04 - val_loss: 8.4909e-04 - val_acc: 0.8154 - val_mean_squared_error: 8.4909e-04
Training complete, saving model as:  final_model.h5
In [23]:
#Plot the training and validation loss of the final model (adamax)
legend_list = []
plt.plot(hist.history['val_loss'])
plt.plot(hist.history['loss'])
legend_list.append('Validation loss')
legend_list.append('Training loss')
plt.legend(legend_list, loc='upper right')
plt.title('Final model loss')
plt.ylabel('loss')
plt.xlabel('epoch')

fig_size = [20,20]
plt.rcParams["figure.figsize"] = fig_size
plt.show()
plt.gcf().clear()
plt.clf()
plt.cla()
plt.close()
In [24]:
#Plot the training and validation accuracy of the final model (adamax)
legend_list = []
plt.plot(hist.history['val_acc'])
plt.plot(hist.history['acc'])
legend_list.append('Validation accuracy')
legend_list.append('Training accuracy')
plt.legend(legend_list, loc='upper right')
plt.title('Final model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')

fig_size = [20,20]
plt.rcParams["figure.figsize"] = fig_size
plt.show()
plt.gcf().clear()
plt.clf()
plt.cla()
plt.close()

Question 3: Do you notice any evidence of overfitting or underfitting in the above plot? If so, what steps have you taken to improve your model? Note that slight overfitting or underfitting will not hurt your chances of a successful submission, as long as you have attempted some solutions towards improving your model (such as regularization, dropout, increased/decreased number of layers, etc).

  • Observed underfitting (essentially no learning) when I used fewer layers.
  • Increased the number of convolutional layers.
  • Noticed overfitting: training accuracy kept improving while the cross-validation metrics either tapered off or got worse.
  • As a result, added dropout after each layer, with higher dropout rates in the initial layers based on training results (a minimal sketch of such a block follows this list).
  • Finally, ran some sampling tests on the final "test set" and manually compared the results. The results were good and the facial keypoints were placed reasonably well on all the images.
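
Here is a minimal sketch of the kind of convolution + dropout block described above. It is illustrative only (Keras 2 API assumed); the filter counts, kernel sizes and dropout rates are placeholders, not the exact architecture trained in Step 5.

# Illustrative conv block with dropout; heavier dropout near the input.
# All sizes and rates below are assumptions for this sketch, not the trained model.
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

sketch = Sequential()
sketch.add(Conv2D(16, (3, 3), activation='relu', input_shape=(96, 96, 1)))
sketch.add(MaxPooling2D(pool_size=(2, 2)))
sketch.add(Dropout(0.4))   # larger dropout rate in the early layers
sketch.add(Conv2D(32, (3, 3), activation='relu'))
sketch.add(MaxPooling2D(pool_size=(2, 2)))
sketch.add(Dropout(0.2))   # smaller dropout rate deeper in the network
sketch.add(Flatten())
sketch.add(Dense(30))      # 15 (x, y) keypoint pairs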

Improvements planned (possibly after submission)

  • Work on the samples that have only some of the keypoints labeled, and try to include them in the training set.
  • Introduce image augmentation to see whether additional skews of the images help (not sure it will in this particular case, since most photos, even in the test sets, appear to be front-facing with little skew; a few, like the one in the test sample set below (3rd row, 1st col), might benefit from it).
  • Try advanced non-linearities like PReLU / Leaky ReLU (see the sketch after this list).
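
A minimal sketch of the last idea (illustrative only, Keras 2 API assumed; layer sizes are placeholders): the built-in activation is dropped from the convolutional layer and a LeakyReLU (or PReLU) layer is added immediately after it.

# Illustrative only: swap the ReLU of a conv layer for LeakyReLU (PReLU works the same way).
from keras.models import Sequential
from keras.layers import Conv2D, LeakyReLU

alt = Sequential()
alt.add(Conv2D(16, (3, 3), input_shape=(96, 96, 1)))  # no activation set on the layer itself
alt.add(LeakyReLU(alpha=0.1))                         # small slope for negative inputs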

Visualize a Subset of the Test Predictions

Execute the code cell below to visualize your model's predicted keypoints on a subset of the testing images.

In [42]:
from keras.models import load_model

print("Loading the trained final model from disk into memory")
model = load_model('final_model.h5')

y_test = model.predict(X_test)
fig = plt.figure(figsize=(20,20))
fig.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(9):
    ax = fig.add_subplot(3, 3, i + 1, xticks=[], yticks=[])
    plot_data(X_test[i], y_test[i], ax)
Loading the trained final model from disk into memory

Step 8: Complete the pipeline

With the work you did in Sections 1 and 2 of this notebook, along with your freshly trained facial keypoint detector, you can now complete the full pipeline. That is, given a color image containing a person or persons, you can now:

  • Detect the faces in this image automatically using OpenCV
  • Predict the facial keypoints in each face detected in the image
  • Paint predicted keypoints on each face detected

In this subsection you will do just that!

(IMPLEMENTATION) Facial Keypoints Detector

Use the OpenCV face detection functionality you built in previous sections to expand your keypoint detector to color images of arbitrary size. Your function should perform the following steps:

  1. Accept a color image.
  2. Convert the image to grayscale.
  3. Detect and crop the face contained in the image.
  4. Locate the facial keypoints in the cropped image.
  5. Overlay the facial keypoints in the original (color, uncropped) image.

Note: step 4 can be the trickiest because remember your convolutional network is only trained to detect facial keypoints in $96 \times 96$ grayscale images where each pixel was normalized to lie in the interval $[0,1]$, and remember that each facial keypoint was normalized during training to the interval $[-1,1]$. This means - practically speaking - to paint detected keypoints onto a test face you need to perform this same pre-processing to your candidate face - that is after detecting it you should resize it to $96 \times 96$ and normalize its values before feeding it into your facial keypoint detector. To be shown correctly on the original image the output keypoints from your detector then need to be shifted and re-normalized from the interval $[-1,1]$ to the width and height of your detected face.
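
To make that arithmetic concrete, here is a small illustrative helper (not part of the provided template; the helper name and the (x, y, w, h) bounding-box layout returned by detectMultiScale are assumptions) that maps a single keypoint from the model's normalized output back onto the original image:

# Illustrative only: convert one normalized keypoint (kx, ky) in [-1, 1], predicted on a
# 96x96 crop, back to pixel coordinates in the original (uncropped) image.
def keypoint_to_image_coords(kx, ky, face_box):
    x, y, w, h = face_box                # face bounding box from the Haar cascade
    crop_x = (kx * 48 + 48) * (w / 96.)  # [-1, 1] -> [0, 96] -> crop width in pixels
    crop_y = (ky * 48 + 48) * (h / 96.)  # [-1, 1] -> [0, 96] -> crop height in pixels
    return int(round(x + crop_x)), int(round(y + crop_y))  # shift by the crop's top-left corner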

When complete you should be able to produce example images like the one below

In [43]:
from keras.models import load_model

# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')

#Load the model.
model = load_model('final_model.h5')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

image_copy = np.copy(image)

# plot our image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('image copy')
ax1.imshow(image_copy, cmap='gray')
Out[43]:
<matplotlib.image.AxesImage at 0x5cb6d208>
In [44]:
### Use the face detection code we saw in Section 1 with your trained conv-net 
##  Paint the predicted keypoints on the test image

def get_keypoints(img, model):
    # Detect faces using the haarcascade.
    face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
    # Detect the faces in image
    faces = face_cascade.detectMultiScale(img, 1.30, 10)

    # Get the bounding box for each detected face

    locations = []
    for (x,y,w,h) in faces:
        cropped_img = img[y:y+h, x:x+w]
        # Add a red bounding box to the detections image
        #cv2.rectangle(img, (x,y), (x+w,y+h), (255,0,0), 2)
    
        #Create a grayscale image out of the cropped image.
        reshaped_img = np.copy(cropped_img)
        reshaped_img = cv2.cvtColor(reshaped_img, cv2.COLOR_RGB2GRAY)
        reshaped_img = cv2.resize(reshaped_img, (96,96))    
    
        # Normalize the pixel values to [0, 1] and reshape so that the model can consume it.
        reshaped_img = reshaped_img.astype(np.float32) / 255.
        reshaped_img = reshaped_img.reshape(-1, 96, 96, 1)
    
        #Predict the facial keypoints using our trained model.
        reshaped_locations = model.predict(reshaped_img)
    
        #Undo the normalization of the keypoints so that it can fit in the original image.
        reshaped_locations = reshaped_locations * 48 + 48
    
        #Rescale the images according to the size of the cropped image
        cropped_locations = reshaped_locations * (cropped_img.shape[0]/96)
        cropped_locations = np.around(cropped_locations)
        locations.append(cropped_locations)
        
    return locations, faces

detected_image = np.copy(image)
locations, faces = get_keypoints(detected_image, model)

for count in range(len(locations)):
    # Add a red bounding box to the detections image
    (x,y,w,h) = faces[count]
    cropped_img = image_copy[y:y+h, x:x+w]
    cv2.rectangle(image_copy, (x,y), (x+w,y+h), (255,0,0), 3)

    # For each keypoint pair, draw a circle in the cropped image (color)
    location = locations[count]
    for j in range(0, 30, 2):
        cv2.circle(cropped_img, (int(location[0][j]), int(location[0][j+1])), 1, (0,255,0), 2)
    
    #Merge the cropped image with keypoints drawn, into the original image.
    detected_image[y:y+h, x:x+w] = cropped_img

#Paint the original image with faces and keypoints detected.
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('keypoints detected image')
ax1.imshow(detected_image, cmap='gray')
Out[44]:
<matplotlib.image.AxesImage at 0x5cdb75f8>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add facial keypoint detection to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for keypoint detection and marking in the previous exercise and you should be good to go!

In [45]:
import cv2
import time 
from keras.models import load_model
def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # Try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # keep video stream open
    while rval:
        
        frame_copy = np.copy(frame)
        locations, faces = get_keypoints(frame_copy, model)

        for count in range(len(locations)):
            (x,y,w,h) = faces[count]

            # Work on a contiguous copy of the face region so OpenCV can draw on it
            cropped_img = np.copy(frame_copy[y:y+h, x:x+w])

            # For each (x, y) keypoint pair, draw a small green circle on the cropped face
            location = locations[count]
            for j in range(0, 30, 2):
                cv2.circle(cropped_img, (int(location[0][j]), int(location[0][j+1])), 1, (0,255,0), 2)

            # Merge the cropped face with keypoints drawn back into the frame
            frame_copy[y:y+h, x:x+w] = cropped_img

            # Add a red bounding box around the detected face
            cv2.rectangle(frame_copy, (x,y), (x+w,y+h), (255,0,0), 2)
        
        # plot image from camera with detections marked
        cv2.imshow("Detecting facial keypoints", frame_copy)
        
        # exit functionality - press '0' to exit laptop video
        key = cv2.waitKey(20)
        if key == ord('0'): # exit by pressing the '0' key
            # destroy windows
            cv2.destroyAllWindows()
            
            # hack from stack overflow for making sure window closes on osx --> https://stackoverflow.com/questions/6116564/destroywindow-does-not-close-window-on-mac-using-python-and-opencv
            for i in range (1,5):
                cv2.waitKey(1)
            return frame_copy
        
        # read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()  
In [48]:
# Run your keypoint face painter
snapped_img = laptop_camera_go()
In [51]:
snapped_img = cv2.cvtColor(snapped_img, cv2.COLOR_BGR2RGB)
# plot the snapped image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Keypoints, different facial expression!')
ax1.imshow(snapped_img, cmap='gray')
Out[51]:
<matplotlib.image.AxesImage at 0x60919470>

(Optional) Further Directions - add a filter using facial keypoints

Using your freshly minted facial keypoint detection pipeline, you can now do things like add fun filters to a person's face automatically. In this optional exercise, you can play around with automatically adding sunglasses to each individual's face in an image, as shown in the demonstration image below.

To produce this effect, we'll start from the image of a pair of sunglasses loaded in the Python cell below.

In [52]:
# Load in sunglasses image - note the usage of the special option
# cv2.IMREAD_UNCHANGED; this option is used because the sunglasses
# image has a 4th channel that allows us to control how transparent each pixel in the image is
sunglasses = cv2.imread("images/sunglasses_4.png", cv2.IMREAD_UNCHANGED)

# Plot the image
fig = plt.figure(figsize = (6,6))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.imshow(sunglasses)
ax1.axis('off');

This image is placed over each individual's face using the detected eye points to determine the location of the sunglasses, and eyebrow points to determine the size that the sunglasses should be for each person (one could also use the nose point to determine this).

Notice that this image actually has 4 channels, not just 3.

In [53]:
# Print out the shape of the sunglasses image
print ('The sunglasses image has shape: ' + str(np.shape(sunglasses)))
The sunglasses image has shape: (1123, 3064, 4)

It has the usual red, blue, and green channels any color image has, with the 4th channel representing the transparency level of each pixel in the image. Here's how the transparency channel works: the lower the value, the more transparent the pixel will become. The lower bound (completely transparent) is zero here, so any pixels set to 0 will not be seen.

This is how we can place this image of sunglasses on someone's face and still see the area of their face around the sunglasses - because those pixels in the sunglasses image have been made completely transparent.

Let's check out the alpha channel of our sunglasses image in the next Python cell. Note that because many of the pixels near the boundary are transparent, we'll need to explicitly print out the non-zero values if we want to see them.

In [54]:
# Print out the sunglasses transparency (alpha) channel
alpha_channel = sunglasses[:,:,3]
print ('the alpha channel here looks like')
print (alpha_channel)

# Just to double check that there are indeed non-zero values
# Let's find and print out every value greater than zero
values = np.where(alpha_channel != 0)
print ('\n the non-zero values of the alpha channel look like')
print (values)
the alpha channel here looks like
[[0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 ..., 
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]
 [0 0 0 ..., 0 0 0]]

 the non-zero values of the alpha channel look like
(array([  17,   17,   17, ..., 1109, 1109, 1109], dtype=int64), array([ 687,  688,  689, ..., 2376, 2377, 2378], dtype=int64))

This means that when we place this sunglasses image on top of another image, we can use the transparency channel as a filter to tell us which pixels to overlay on a new image (only the non-transparent ones with values greater than zero).
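
As a minimal sketch of that masking idea (distinct from the soft-blending `overlay` helper used later in this notebook), one could copy only the visible pixels with a boolean mask. The names `hard_overlay`, `background_patch`, and `rgba_overlay` below are illustrative, and the two arrays are assumed to have the same height and width:

import numpy as np

def hard_overlay(background_patch, rgba_overlay):
    # Copy the background so the original patch is left untouched
    out = np.copy(background_patch)
    # A pixel is visible wherever the overlay's alpha channel is non-zero
    mask = rgba_overlay[:, :, 3] > 0
    # Copy only the visible overlay pixels (color channels) onto the background
    out[mask] = rgba_overlay[:, :, :3][mask]
    return out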

One last thing: it's helpful to understand which keypoint belongs to the eyes, mouth, etc. So, in the image below, we also display the index of each facial keypoint directly on the image so that you can tell which keypoints are for the eyes, eyebrows, etc.

With this information, you're well on your way to completing this filtering task! See if you can place the sunglasses automatically on the individuals in the image loaded in / shown in the next Python cell.
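
If you don't have that index image handy, the sketch below lists one common mapping, assuming the model was trained on the standard 15-keypoint column order of the Kaggle Facial Keypoints dataset used in Part 2. `KEYPOINT_NAMES` and `keypoint_xy` are illustrative helpers rather than project code, so double-check the order against your own training CSV:

# Each keypoint k occupies two consecutive entries in the flat prediction
# vector: x at index 2*k and y at index 2*k + 1
KEYPOINT_NAMES = [
    'left_eye_center', 'right_eye_center',
    'left_eye_inner_corner', 'left_eye_outer_corner',
    'right_eye_inner_corner', 'right_eye_outer_corner',
    'left_eyebrow_inner_end', 'left_eyebrow_outer_end',
    'right_eyebrow_inner_end', 'right_eyebrow_outer_end',
    'nose_tip',
    'mouth_left_corner', 'mouth_right_corner',
    'mouth_center_top_lip', 'mouth_center_bottom_lip',
]

def keypoint_xy(prediction, name):
    # Look up a keypoint by name and return its (x, y) pair
    k = KEYPOINT_NAMES.index(name)
    return prediction[2 * k], prediction[2 * k + 1]

Under this assumption, `keypoint_xy(location[0], 'right_eyebrow_outer_end')` picks out the same point as the hard-coded index 18 used in the sunglasses cell below.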

In [55]:
# Load in color image for face detection
image = cv2.imread('images/obamas4.jpg')

# Convert the image to RGB colorspace
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)


# Plot the image
fig = plt.figure(figsize = (8,8))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Original Image')
ax1.imshow(image)
Out[55]:
<matplotlib.image.AxesImage at 0x60c596d8>
In [56]:
## Use the face detection code we saw in Section 1 with your trained conv-net
## to put sunglasses on the individuals in our test image

# This article was a reference point for the overlay mechanism:
# https://stackoverflow.com/questions/40895785/using-opencv-to-overlay-transparent-image-onto-another-image
def overlay(bg_img, fg_img):
    # Separate the overlay into its color channels and its normalized alpha mask
    fg_rgb = fg_img[:,:,:3]
    fg_mask = fg_img[:,:,3:]/255.0
    bg_rgb = bg_img
    
    # The result is a per-pixel weighted sum of the foreground and background,
    # with the alpha mask as the weight
    return bg_rgb*(1.0-fg_mask) + fg_rgb*fg_mask
    
def paint_sunglasses(img, model, sunglass_image):
    # Overlay sunglasses on each face detected in the image
    img_copy = np.copy(img)
    locations, faces = get_keypoints(img_copy, model)

    for count in range(len(locations)):
        # Find a bounding box to the detections image
        (x,y,w,h) = faces[count]
        cropped_img = img_copy[y:y+h, x:x+w]
        location = locations[count]
    
        # Keypoints 9 and 3 (as numbered in the keypoint-index diagram) live at
        # flat indices (18, 19) and (6, 7): roughly the top-left and bottom-right
        # corners of the eye/eyebrow region.
        top_left_idx, bottom_right_idx = 18, 6
    
        # Find the coordinates of these two points, shifted up and left by a few
        # pixels so the sunglasses also cover the eyebrows.
        shift_factor = 5
        x1 = int(location[0][top_left_idx]) - shift_factor
        y1 = int(location[0][top_left_idx + 1]) - shift_factor
        x2 = int(location[0][bottom_right_idx]) - shift_factor
        y2 = int(location[0][bottom_right_idx + 1]) - shift_factor
    
        # Enlarge the box by amplification factors so the sunglasses fit the face
        width_amplifier = 1.3
        height_amplifier = 3.4
        w_, h_ = int((x2-x1)*width_amplifier), int((y2-y1)*height_amplifier)
    
        # Resize the sunglasses to this rectangle and alpha-blend them onto the face
        sg_copy = cv2.resize(np.copy(sunglass_image), (w_,h_))
        cropped_img[y1:y1+h_, x1:x1+w_] = overlay(cropped_img[y1:y1+h_, x1:x1+w_], sg_copy)
    
        # Merge the cropped face, with the sunglasses overlaid, back into the copy of the original image
        img_copy[y:y+h, x:x+w] = cropped_img
    return img_copy

image_copy = paint_sunglasses(image, model, sunglasses)

#Paint the original image with faces and keypoints detected.
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Detect keypoints and overlay an image.')
ax1.imshow(image_copy, cmap='gray')
Out[56]:
<matplotlib.image.AxesImage at 0x60c810f0>

(Optional) Further Directions - add a filter using facial keypoints to your laptop camera

Now you can add the sunglasses filter to your laptop camera - as illustrated in the gif below.

The next Python cell contains the basic laptop video camera function used in the previous optional video exercises. Combine it with the functionality you developed for adding sunglasses to someone's face in the previous optional exercise and you should be good to go!

In [57]:
import cv2
import time 
from keras.models import load_model
import numpy as np

def laptop_camera_go():
    # Create instance of video capturer
    cv2.namedWindow("face detection activated")
    vc = cv2.VideoCapture(0)

    # try to get the first frame
    if vc.isOpened(): 
        rval, frame = vc.read()
    else:
        rval = False
    
    # Keep video stream open
    while rval:
        # Paint sunglasses on the image.
        frame_modified = paint_sunglasses(frame, model, sunglasses)
        
        # Plot image from camera with detections marked
        cv2.imshow("Image with cool sunglasses!", frame_modified)
        
        # Exit functionality - press '0' to exit laptop video
        key = cv2.waitKey(20)
        if key == ord('0'): # exit by pressing the '0' key
            # Destroy windows 
            cv2.destroyAllWindows()
            
            for i in range (1,5):
                cv2.waitKey(1)
            return frame_modified
        
        # Read next frame
        time.sleep(0.05)             # control framerate for computation - default 20 frames per sec
        rval, frame = vc.read()    
        
In [58]:
# Load facial landmark detector model
model = load_model('final_model.h5')

# Run sunglasses painter
snapped_img = laptop_camera_go()
In [59]:
snapped_img = cv2.cvtColor(snapped_img, cv2.COLOR_BGR2RGB)
# plot the snapped image
fig = plt.figure(figsize = (9,9))
ax1 = fig.add_subplot(111)
ax1.set_xticks([])
ax1.set_yticks([])
ax1.set_title('Image with cool sunglasses!')
ax1.imshow(snapped_img, cmap='gray')
Out[59]:
<matplotlib.image.AxesImage at 0x62294c50>
In [ ]: